00:00:00.002 Started by upstream project "autotest-spdk-master-vs-dpdk-main" build number 3472 00:00:00.002 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3083 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.052 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.053 The recommended git tool is: git 00:00:00.053 using credential 00000000-0000-0000-0000-000000000002 00:00:00.055 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.075 Fetching changes from the remote Git repository 00:00:00.079 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.131 Using shallow fetch with depth 1 00:00:00.131 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.131 > git --version # timeout=10 00:00:00.171 > git --version # 'git version 2.39.2' 00:00:00.171 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.172 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.172 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.646 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.656 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.667 Checking out Revision f620ee97e10840540f53609861ee9b86caa3c192 (FETCH_HEAD) 00:00:04.667 > git config core.sparsecheckout # timeout=10 00:00:04.676 > git read-tree -mu HEAD # timeout=10 00:00:04.690 > git checkout -f f620ee97e10840540f53609861ee9b86caa3c192 # timeout=5 00:00:04.706 Commit message: "change IP of vertiv1 PDU" 00:00:04.706 > git rev-list --no-walk f620ee97e10840540f53609861ee9b86caa3c192 # timeout=10 00:00:04.807 [Pipeline] Start of Pipeline 00:00:04.822 [Pipeline] library 00:00:04.824 Loading library shm_lib@master 00:00:04.824 Library shm_lib@master is cached. Copying from home. 00:00:04.837 [Pipeline] node 00:00:04.846 Running on VM-host-SM9 in /var/jenkins/workspace/nvme-vg-autotest 00:00:04.847 [Pipeline] { 00:00:04.855 [Pipeline] catchError 00:00:04.856 [Pipeline] { 00:00:04.865 [Pipeline] wrap 00:00:04.873 [Pipeline] { 00:00:04.878 [Pipeline] stage 00:00:04.880 [Pipeline] { (Prologue) 00:00:04.894 [Pipeline] echo 00:00:04.895 Node: VM-host-SM9 00:00:04.900 [Pipeline] cleanWs 00:00:04.916 [WS-CLEANUP] Deleting project workspace... 00:00:04.916 [WS-CLEANUP] Deferred wipeout is used... 
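Illustrative aside, not part of the console output: the prologue above pins the jbp repository with a shallow, single-revision fetch followed by a forced checkout of the fetched tip. A minimal manual equivalent, assuming the Gerrit URL is reachable and credentials are already configured, could look like:

# Sketch only: reproduce the shallow jbp checkout by hand.
git init jbp && cd jbp
git fetch --tags --force --progress --depth=1 -- \
    https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
git checkout -f FETCH_HEAD   # this run resolved FETCH_HEAD to f620ee97e10840540f53609861ee9b86caa3c192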
00:00:04.937 [WS-CLEANUP] done 00:00:05.114 [Pipeline] setCustomBuildProperty 00:00:05.181 [Pipeline] nodesByLabel 00:00:05.182 Found a total of 1 nodes with the 'sorcerer' label 00:00:05.189 [Pipeline] httpRequest 00:00:05.193 HttpMethod: GET 00:00:05.193 URL: http://10.211.164.101/packages/jbp_f620ee97e10840540f53609861ee9b86caa3c192.tar.gz 00:00:05.201 Sending request to url: http://10.211.164.101/packages/jbp_f620ee97e10840540f53609861ee9b86caa3c192.tar.gz 00:00:05.211 Response Code: HTTP/1.1 200 OK 00:00:05.211 Success: Status code 200 is in the accepted range: 200,404 00:00:05.212 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_f620ee97e10840540f53609861ee9b86caa3c192.tar.gz 00:00:09.146 [Pipeline] sh 00:00:09.428 + tar --no-same-owner -xf jbp_f620ee97e10840540f53609861ee9b86caa3c192.tar.gz 00:00:09.445 [Pipeline] httpRequest 00:00:09.449 HttpMethod: GET 00:00:09.450 URL: http://10.211.164.101/packages/spdk_1826c4dc56615c2b755dd180ed09b3b2ada575e9.tar.gz 00:00:09.450 Sending request to url: http://10.211.164.101/packages/spdk_1826c4dc56615c2b755dd180ed09b3b2ada575e9.tar.gz 00:00:09.459 Response Code: HTTP/1.1 200 OK 00:00:09.459 Success: Status code 200 is in the accepted range: 200,404 00:00:09.459 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_1826c4dc56615c2b755dd180ed09b3b2ada575e9.tar.gz 00:00:51.533 [Pipeline] sh 00:00:51.812 + tar --no-same-owner -xf spdk_1826c4dc56615c2b755dd180ed09b3b2ada575e9.tar.gz 00:00:54.358 [Pipeline] sh 00:00:54.638 + git -C spdk log --oneline -n5 00:00:54.638 1826c4dc5 autotest: Run `sw_hotplug` test with ASan enabled 00:00:54.638 b084cba07 lib/blob: fixed potential expression overflow 00:00:54.638 ccad22cf9 test: split interrupt_common.sh 00:00:54.638 d4e4841d1 nvmf/vfio-user: improve mapping failure message 00:00:54.638 3e787bba6 nvmf: initialize sgroup->queued when poll group is created 00:00:54.658 [Pipeline] withCredentials 00:00:54.668 > git --version # timeout=10 00:00:54.680 > git --version # 'git version 2.39.2' 00:00:54.696 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:54.698 [Pipeline] { 00:00:54.708 [Pipeline] retry 00:00:54.710 [Pipeline] { 00:00:54.728 [Pipeline] sh 00:00:55.018 + git ls-remote http://dpdk.org/git/dpdk main 00:00:55.029 [Pipeline] } 00:00:55.051 [Pipeline] // retry 00:00:55.056 [Pipeline] } 00:00:55.076 [Pipeline] // withCredentials 00:00:55.089 [Pipeline] httpRequest 00:00:55.093 HttpMethod: GET 00:00:55.094 URL: http://10.211.164.101/packages/dpdk_7e06c0de1952d3109a5b0c4779d7e7d8059c9d78.tar.gz 00:00:55.094 Sending request to url: http://10.211.164.101/packages/dpdk_7e06c0de1952d3109a5b0c4779d7e7d8059c9d78.tar.gz 00:00:55.095 Response Code: HTTP/1.1 200 OK 00:00:55.096 Success: Status code 200 is in the accepted range: 200,404 00:00:55.096 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/dpdk_7e06c0de1952d3109a5b0c4779d7e7d8059c9d78.tar.gz 00:01:01.420 [Pipeline] sh 00:01:01.699 + tar --no-same-owner -xf dpdk_7e06c0de1952d3109a5b0c4779d7e7d8059c9d78.tar.gz 00:01:03.089 [Pipeline] sh 00:01:03.371 + git -C dpdk log --oneline -n5 00:01:03.371 7e06c0de19 examples: move alignment attribute on types for MSVC 00:01:03.371 27595cd830 drivers: move alignment attribute on types for MSVC 00:01:03.371 0efea35a2b app: move alignment attribute on types for MSVC 00:01:03.371 e2e546ab5b version: 24.07-rc0 00:01:03.371 a9778aad62 version: 24.03.0 00:01:03.393 [Pipeline] writeFile 00:01:03.412 [Pipeline] sh 00:01:03.693 + 
jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:03.705 [Pipeline] sh 00:01:03.985 + cat autorun-spdk.conf 00:01:03.985 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:03.985 SPDK_TEST_NVME=1 00:01:03.985 SPDK_TEST_FTL=1 00:01:03.985 SPDK_TEST_ISAL=1 00:01:03.985 SPDK_RUN_ASAN=1 00:01:03.985 SPDK_RUN_UBSAN=1 00:01:03.985 SPDK_TEST_XNVME=1 00:01:03.985 SPDK_TEST_NVME_FDP=1 00:01:03.985 SPDK_TEST_NATIVE_DPDK=main 00:01:03.985 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:03.985 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:03.993 RUN_NIGHTLY=1 00:01:03.995 [Pipeline] } 00:01:04.012 [Pipeline] // stage 00:01:04.028 [Pipeline] stage 00:01:04.030 [Pipeline] { (Run VM) 00:01:04.045 [Pipeline] sh 00:01:04.328 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:04.328 + echo 'Start stage prepare_nvme.sh' 00:01:04.328 Start stage prepare_nvme.sh 00:01:04.328 + [[ -n 1 ]] 00:01:04.328 + disk_prefix=ex1 00:01:04.328 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:01:04.328 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:01:04.328 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:01:04.328 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:04.328 ++ SPDK_TEST_NVME=1 00:01:04.328 ++ SPDK_TEST_FTL=1 00:01:04.328 ++ SPDK_TEST_ISAL=1 00:01:04.328 ++ SPDK_RUN_ASAN=1 00:01:04.328 ++ SPDK_RUN_UBSAN=1 00:01:04.328 ++ SPDK_TEST_XNVME=1 00:01:04.328 ++ SPDK_TEST_NVME_FDP=1 00:01:04.328 ++ SPDK_TEST_NATIVE_DPDK=main 00:01:04.328 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:04.328 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:04.328 ++ RUN_NIGHTLY=1 00:01:04.328 + cd /var/jenkins/workspace/nvme-vg-autotest 00:01:04.328 + nvme_files=() 00:01:04.328 + declare -A nvme_files 00:01:04.328 + backend_dir=/var/lib/libvirt/images/backends 00:01:04.328 + nvme_files['nvme.img']=5G 00:01:04.328 + nvme_files['nvme-cmb.img']=5G 00:01:04.328 + nvme_files['nvme-multi0.img']=4G 00:01:04.328 + nvme_files['nvme-multi1.img']=4G 00:01:04.328 + nvme_files['nvme-multi2.img']=4G 00:01:04.328 + nvme_files['nvme-openstack.img']=8G 00:01:04.328 + nvme_files['nvme-zns.img']=5G 00:01:04.328 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:04.328 + (( SPDK_TEST_FTL == 1 )) 00:01:04.328 + nvme_files["nvme-ftl.img"]=6G 00:01:04.328 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:04.328 + nvme_files["nvme-fdp.img"]=1G 00:01:04.328 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:04.328 + for nvme in "${!nvme_files[@]}" 00:01:04.328 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:01:04.328 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:04.328 + for nvme in "${!nvme_files[@]}" 00:01:04.328 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-ftl.img -s 6G 00:01:04.328 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:04.328 + for nvme in "${!nvme_files[@]}" 00:01:04.328 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:01:04.587 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:04.587 + for nvme in "${!nvme_files[@]}" 00:01:04.587 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:01:04.587 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:04.587 + for nvme in "${!nvme_files[@]}" 00:01:04.587 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:01:04.587 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:04.587 + for nvme in "${!nvme_files[@]}" 00:01:04.587 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:01:04.587 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:04.587 + for nvme in "${!nvme_files[@]}" 00:01:04.587 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:01:04.587 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:04.587 + for nvme in "${!nvme_files[@]}" 00:01:04.587 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-fdp.img -s 1G 00:01:04.846 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:04.846 + for nvme in "${!nvme_files[@]}" 00:01:04.847 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:01:04.847 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:04.847 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:01:04.847 + echo 'End stage prepare_nvme.sh' 00:01:04.847 End stage prepare_nvme.sh 00:01:04.858 [Pipeline] sh 00:01:05.139 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:05.139 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex1-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora38 00:01:05.399 00:01:05.399 
DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:01:05.399 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:01:05.399 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:01:05.399 HELP=0 00:01:05.399 DRY_RUN=0 00:01:05.399 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,/var/lib/libvirt/images/backends/ex1-nvme-fdp.img, 00:01:05.399 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:01:05.399 NVME_AUTO_CREATE=0 00:01:05.399 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,, 00:01:05.399 NVME_CMB=,,,, 00:01:05.399 NVME_PMR=,,,, 00:01:05.399 NVME_ZNS=,,,, 00:01:05.399 NVME_MS=true,,,, 00:01:05.399 NVME_FDP=,,,on, 00:01:05.399 SPDK_VAGRANT_DISTRO=fedora38 00:01:05.399 SPDK_VAGRANT_VMCPU=10 00:01:05.399 SPDK_VAGRANT_VMRAM=12288 00:01:05.399 SPDK_VAGRANT_PROVIDER=libvirt 00:01:05.399 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:05.399 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:05.399 SPDK_OPENSTACK_NETWORK=0 00:01:05.399 VAGRANT_PACKAGE_BOX=0 00:01:05.399 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:05.399 FORCE_DISTRO=true 00:01:05.399 VAGRANT_BOX_VERSION= 00:01:05.399 EXTRA_VAGRANTFILES= 00:01:05.399 NIC_MODEL=e1000 00:01:05.399 00:01:05.399 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt' 00:01:05.399 /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:01:08.687 Bringing machine 'default' up with 'libvirt' provider... 00:01:08.944 ==> default: Creating image (snapshot of base box volume). 00:01:09.202 ==> default: Creating domain with the following settings... 
00:01:09.202 ==> default: -- Name: fedora38-38-1.6-1705279005-2131_default_1715654874_871d45492b50a77514a3 00:01:09.202 ==> default: -- Domain type: kvm 00:01:09.202 ==> default: -- Cpus: 10 00:01:09.202 ==> default: -- Feature: acpi 00:01:09.202 ==> default: -- Feature: apic 00:01:09.202 ==> default: -- Feature: pae 00:01:09.202 ==> default: -- Memory: 12288M 00:01:09.202 ==> default: -- Memory Backing: hugepages: 00:01:09.202 ==> default: -- Management MAC: 00:01:09.202 ==> default: -- Loader: 00:01:09.202 ==> default: -- Nvram: 00:01:09.202 ==> default: -- Base box: spdk/fedora38 00:01:09.202 ==> default: -- Storage pool: default 00:01:09.202 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1705279005-2131_default_1715654874_871d45492b50a77514a3.img (20G) 00:01:09.202 ==> default: -- Volume Cache: default 00:01:09.202 ==> default: -- Kernel: 00:01:09.202 ==> default: -- Initrd: 00:01:09.202 ==> default: -- Graphics Type: vnc 00:01:09.202 ==> default: -- Graphics Port: -1 00:01:09.202 ==> default: -- Graphics IP: 127.0.0.1 00:01:09.202 ==> default: -- Graphics Password: Not defined 00:01:09.202 ==> default: -- Video Type: cirrus 00:01:09.202 ==> default: -- Video VRAM: 9216 00:01:09.202 ==> default: -- Sound Type: 00:01:09.202 ==> default: -- Keymap: en-us 00:01:09.202 ==> default: -- TPM Path: 00:01:09.202 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:09.202 ==> default: -- Command line args: 00:01:09.202 ==> default: -> value=-device, 00:01:09.202 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:09.202 ==> default: -> value=-drive, 00:01:09.202 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:01:09.202 ==> default: -> value=-device, 00:01:09.202 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:01:09.202 ==> default: -> value=-device, 00:01:09.203 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:09.203 ==> default: -> value=-drive, 00:01:09.203 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-1-drive0, 00:01:09.203 ==> default: -> value=-device, 00:01:09.203 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:09.203 ==> default: -> value=-device, 00:01:09.203 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:01:09.203 ==> default: -> value=-drive, 00:01:09.203 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:01:09.203 ==> default: -> value=-device, 00:01:09.203 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:09.203 ==> default: -> value=-drive, 00:01:09.203 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:01:09.203 ==> default: -> value=-device, 00:01:09.203 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:09.203 ==> default: -> value=-drive, 00:01:09.203 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:01:09.203 ==> default: -> value=-device, 00:01:09.203 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:09.203 ==> default: -> value=-device, 00:01:09.203 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:01:09.203 ==> default: -> value=-device, 00:01:09.203 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:01:09.203 ==> default: -> value=-drive, 00:01:09.203 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:01:09.203 ==> default: -> value=-device, 00:01:09.203 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:09.203 ==> default: Creating shared folders metadata... 00:01:09.203 ==> default: Starting domain. 00:01:10.583 ==> default: Waiting for domain to get an IP address... 00:01:28.670 ==> default: Waiting for SSH to become available... 00:01:30.045 ==> default: Configuring and enabling network interfaces... 00:01:34.237 default: SSH address: 192.168.121.63:22 00:01:34.237 default: SSH username: vagrant 00:01:34.237 default: SSH auth method: private key 00:01:36.143 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:44.298 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:01:49.567 ==> default: Mounting SSHFS shared folder... 00:01:50.502 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:01:50.502 ==> default: Checking Mount.. 00:01:51.879 ==> default: Folder Successfully Mounted! 00:01:51.879 ==> default: Running provisioner: file... 00:01:52.817 default: ~/.gitconfig => .gitconfig 00:01:53.076 00:01:53.076 SUCCESS! 00:01:53.076 00:01:53.076 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:01:53.076 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:53.076 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:01:53.076 00:01:53.086 [Pipeline] } 00:01:53.104 [Pipeline] // stage 00:01:53.114 [Pipeline] dir 00:01:53.114 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt 00:01:53.116 [Pipeline] { 00:01:53.132 [Pipeline] catchError 00:01:53.134 [Pipeline] { 00:01:53.150 [Pipeline] sh 00:01:53.430 + vagrant ssh-config --host vagrant 00:01:53.430 + sed -ne /^Host/,$p 00:01:53.430 + tee ssh_conf 00:01:56.717 Host vagrant 00:01:56.717 HostName 192.168.121.63 00:01:56.717 User vagrant 00:01:56.717 Port 22 00:01:56.717 UserKnownHostsFile /dev/null 00:01:56.717 StrictHostKeyChecking no 00:01:56.717 PasswordAuthentication no 00:01:56.717 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1705279005-2131/libvirt/fedora38 00:01:56.717 IdentitiesOnly yes 00:01:56.717 LogLevel FATAL 00:01:56.717 ForwardAgent yes 00:01:56.717 ForwardX11 yes 00:01:56.717 00:01:56.731 [Pipeline] withEnv 00:01:56.733 [Pipeline] { 00:01:56.749 [Pipeline] sh 00:01:57.029 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:57.029 source /etc/os-release 00:01:57.029 [[ -e /image.version ]] && img=$(< /image.version) 00:01:57.029 # Minimal, systemd-like check. 
00:01:57.029 if [[ -e /.dockerenv ]]; then 00:01:57.029 # Clear garbage from the node's name: 00:01:57.029 # agt-er_autotest_547-896 -> autotest_547-896 00:01:57.029 # $HOSTNAME is the actual container id 00:01:57.029 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:57.029 if mountpoint -q /etc/hostname; then 00:01:57.029 # We can assume this is a mount from a host where container is running, 00:01:57.029 # so fetch its hostname to easily identify the target swarm worker. 00:01:57.029 container="$(< /etc/hostname) ($agent)" 00:01:57.029 else 00:01:57.029 # Fallback 00:01:57.029 container=$agent 00:01:57.029 fi 00:01:57.029 fi 00:01:57.029 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:57.029 00:01:57.301 [Pipeline] } 00:01:57.321 [Pipeline] // withEnv 00:01:57.331 [Pipeline] setCustomBuildProperty 00:01:57.346 [Pipeline] stage 00:01:57.349 [Pipeline] { (Tests) 00:01:57.369 [Pipeline] sh 00:01:57.665 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:57.677 [Pipeline] timeout 00:01:57.677 Timeout set to expire in 40 min 00:01:57.678 [Pipeline] { 00:01:57.690 [Pipeline] sh 00:01:57.963 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:58.529 HEAD is now at 1826c4dc5 autotest: Run `sw_hotplug` test with ASan enabled 00:01:58.542 [Pipeline] sh 00:01:58.822 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:59.096 [Pipeline] sh 00:01:59.376 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:59.650 [Pipeline] sh 00:01:59.932 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant ./autoruner.sh spdk_repo 00:02:00.190 ++ readlink -f spdk_repo 00:02:00.191 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:00.191 + [[ -n /home/vagrant/spdk_repo ]] 00:02:00.191 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:00.191 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:00.191 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:00.191 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:00.191 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:00.191 + cd /home/vagrant/spdk_repo 00:02:00.191 + source /etc/os-release 00:02:00.191 ++ NAME='Fedora Linux' 00:02:00.191 ++ VERSION='38 (Cloud Edition)' 00:02:00.191 ++ ID=fedora 00:02:00.191 ++ VERSION_ID=38 00:02:00.191 ++ VERSION_CODENAME= 00:02:00.191 ++ PLATFORM_ID=platform:f38 00:02:00.191 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:02:00.191 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:00.191 ++ LOGO=fedora-logo-icon 00:02:00.191 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:02:00.191 ++ HOME_URL=https://fedoraproject.org/ 00:02:00.191 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:02:00.191 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:00.191 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:00.191 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:00.191 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:02:00.191 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:00.191 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:02:00.191 ++ SUPPORT_END=2024-05-14 00:02:00.191 ++ VARIANT='Cloud Edition' 00:02:00.191 ++ VARIANT_ID=cloud 00:02:00.191 + uname -a 00:02:00.191 Linux fedora38-cloud-1705279005-2131 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:02:00.191 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:00.449 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:00.708 Hugepages 00:02:00.708 node hugesize free / total 00:02:00.708 node0 1048576kB 0 / 0 00:02:00.708 node0 2048kB 0 / 0 00:02:00.708 00:02:00.708 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:00.708 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:00.967 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:00.967 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:02:00.967 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:02:00.967 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:02:00.967 + rm -f /tmp/spdk-ld-path 00:02:00.967 + source autorun-spdk.conf 00:02:00.967 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:00.967 ++ SPDK_TEST_NVME=1 00:02:00.967 ++ SPDK_TEST_FTL=1 00:02:00.967 ++ SPDK_TEST_ISAL=1 00:02:00.967 ++ SPDK_RUN_ASAN=1 00:02:00.967 ++ SPDK_RUN_UBSAN=1 00:02:00.967 ++ SPDK_TEST_XNVME=1 00:02:00.967 ++ SPDK_TEST_NVME_FDP=1 00:02:00.967 ++ SPDK_TEST_NATIVE_DPDK=main 00:02:00.967 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:00.967 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:00.967 ++ RUN_NIGHTLY=1 00:02:00.967 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:00.967 + [[ -n '' ]] 00:02:00.967 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:00.967 + for M in /var/spdk/build-*-manifest.txt 00:02:00.967 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:00.967 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:00.967 + for M in /var/spdk/build-*-manifest.txt 00:02:00.967 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:00.967 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:00.967 ++ uname 00:02:00.967 + [[ Linux == \L\i\n\u\x ]] 00:02:00.967 + sudo dmesg -T 00:02:00.967 + sudo dmesg --clear 00:02:00.967 + dmesg_pid=5922 00:02:00.967 + [[ Fedora Linux == FreeBSD ]] 00:02:00.967 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:00.967 + UNBIND_ENTIRE_IOMMU_GROUP=yes 
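Illustrative aside, not part of the console output: the `setup.sh status` block above summarizes the node0 hugepage pools and the kernel driver each emulated NVMe controller is bound to. Roughly the same information can be read directly from sysfs inside the guest (the BDFs 0000:00:10.0 through 0000:00:13.0 are taken from this run):

# Hugepage pools for node0, matching the "node hugesize free / total" rows above.
for sz in /sys/devices/system/node/node0/hugepages/hugepages-*; do
    echo "$(basename "$sz"): free=$(cat "$sz/free_hugepages") total=$(cat "$sz/nr_hugepages")"
done
# Driver binding for the four emulated NVMe controllers in this VM.
for dev in /sys/bus/pci/devices/0000:00:1[0-3].0; do
    echo "$(basename "$dev") -> $(basename "$(readlink -f "$dev/driver")")"
done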
00:02:00.967 + sudo dmesg -Tw 00:02:00.967 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:00.967 + [[ -x /usr/src/fio-static/fio ]] 00:02:00.967 + export FIO_BIN=/usr/src/fio-static/fio 00:02:00.967 + FIO_BIN=/usr/src/fio-static/fio 00:02:00.967 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:00.967 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:00.967 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:00.968 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:00.968 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:00.968 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:00.968 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:00.968 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:00.968 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:00.968 Test configuration: 00:02:00.968 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:00.968 SPDK_TEST_NVME=1 00:02:00.968 SPDK_TEST_FTL=1 00:02:00.968 SPDK_TEST_ISAL=1 00:02:00.968 SPDK_RUN_ASAN=1 00:02:00.968 SPDK_RUN_UBSAN=1 00:02:00.968 SPDK_TEST_XNVME=1 00:02:00.968 SPDK_TEST_NVME_FDP=1 00:02:00.968 SPDK_TEST_NATIVE_DPDK=main 00:02:00.968 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:00.968 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:01.227 RUN_NIGHTLY=1 02:48:47 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:01.227 02:48:47 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:01.227 02:48:47 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:01.227 02:48:47 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:01.227 02:48:47 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:01.227 02:48:47 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:01.227 02:48:47 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:01.227 02:48:47 -- paths/export.sh@5 -- $ export PATH 00:02:01.227 02:48:47 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:01.227 02:48:47 -- common/autobuild_common.sh@436 -- $ 
out=/home/vagrant/spdk_repo/spdk/../output 00:02:01.227 02:48:47 -- common/autobuild_common.sh@437 -- $ date +%s 00:02:01.227 02:48:47 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715654927.XXXXXX 00:02:01.227 02:48:47 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715654927.0Zv7UQ 00:02:01.227 02:48:47 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:02:01.227 02:48:47 -- common/autobuild_common.sh@443 -- $ '[' -n main ']' 00:02:01.227 02:48:47 -- common/autobuild_common.sh@444 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:01.227 02:48:47 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:01.227 02:48:47 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:01.227 02:48:47 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:01.227 02:48:47 -- common/autobuild_common.sh@453 -- $ get_config_params 00:02:01.227 02:48:47 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:02:01.227 02:48:47 -- common/autotest_common.sh@10 -- $ set +x 00:02:01.227 02:48:47 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme' 00:02:01.227 02:48:47 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:02:01.227 02:48:47 -- pm/common@17 -- $ local monitor 00:02:01.227 02:48:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:01.227 02:48:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:01.227 02:48:47 -- pm/common@25 -- $ sleep 1 00:02:01.227 02:48:47 -- pm/common@21 -- $ date +%s 00:02:01.227 02:48:47 -- pm/common@21 -- $ date +%s 00:02:01.227 02:48:47 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1715654927 00:02:01.227 02:48:47 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1715654927 00:02:01.227 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1715654927_collect-vmstat.pm.log 00:02:01.227 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1715654927_collect-cpu-load.pm.log 00:02:02.184 02:48:48 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:02:02.184 02:48:48 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:02.184 02:48:48 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:02.184 02:48:48 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:02.184 02:48:48 -- spdk/autobuild.sh@16 -- $ date -u 00:02:02.184 Tue May 14 02:48:48 AM UTC 2024 00:02:02.184 02:48:48 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:02.184 v24.05-pre-600-g1826c4dc5 00:02:02.184 02:48:48 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:02.184 02:48:48 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:02.184 02:48:48 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:02:02.184 02:48:48 -- common/autotest_common.sh@1103 -- $ xtrace_disable 
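For reference, outside the log: the config_params string assembled above carries the flags that autobuild is expected to hand to SPDK's ./configure later in the run. Executed by hand inside the VM, the equivalent configuration step would look roughly like the sketch below; this is illustrative only and assumes the standard SPDK configure/make flow rather than the autorun.sh driver used here.

cd /home/vagrant/spdk_repo/spdk
./configure --enable-debug --enable-werror --with-rdma --with-idxd \
    --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
    --enable-ubsan --enable-asan --enable-coverage --with-ublk \
    --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme
make -j"$(nproc)"   # illustrative; the job drives the build through spdk/autorun.sh instead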
00:02:02.185 02:48:48 -- common/autotest_common.sh@10 -- $ set +x 00:02:02.185 ************************************ 00:02:02.185 START TEST asan 00:02:02.185 ************************************ 00:02:02.185 using asan 00:02:02.185 02:48:48 asan -- common/autotest_common.sh@1121 -- $ echo 'using asan' 00:02:02.185 00:02:02.185 real 0m0.000s 00:02:02.185 user 0m0.000s 00:02:02.185 sys 0m0.000s 00:02:02.185 02:48:48 asan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:02.185 02:48:48 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:02.185 ************************************ 00:02:02.185 END TEST asan 00:02:02.185 ************************************ 00:02:02.185 02:48:48 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:02.185 02:48:48 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:02.185 02:48:48 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:02:02.185 02:48:48 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:02:02.185 02:48:48 -- common/autotest_common.sh@10 -- $ set +x 00:02:02.185 ************************************ 00:02:02.185 START TEST ubsan 00:02:02.185 ************************************ 00:02:02.185 using ubsan 00:02:02.185 02:48:48 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:02:02.185 00:02:02.185 real 0m0.000s 00:02:02.185 user 0m0.000s 00:02:02.185 sys 0m0.000s 00:02:02.185 02:48:48 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:02.185 02:48:48 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:02.185 ************************************ 00:02:02.185 END TEST ubsan 00:02:02.185 ************************************ 00:02:02.443 02:48:48 -- spdk/autobuild.sh@27 -- $ '[' -n main ']' 00:02:02.443 02:48:48 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:02.443 02:48:48 -- common/autobuild_common.sh@429 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:02.443 02:48:48 -- common/autotest_common.sh@1097 -- $ '[' 2 -le 1 ']' 00:02:02.443 02:48:48 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:02:02.443 02:48:48 -- common/autotest_common.sh@10 -- $ set +x 00:02:02.443 ************************************ 00:02:02.443 START TEST build_native_dpdk 00:02:02.443 ************************************ 00:02:02.443 02:48:48 build_native_dpdk -- common/autotest_common.sh@1121 -- $ _build_native_dpdk 00:02:02.443 02:48:48 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:02.443 02:48:48 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:02.443 02:48:48 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:02.443 02:48:48 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:02.443 02:48:48 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:02.443 02:48:48 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:02.443 02:48:48 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:02.443 02:48:48 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:02.443 02:48:48 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:02.443 02:48:48 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:02.443 02:48:48 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:02.443 02:48:48 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:02.443 02:48:48 build_native_dpdk 
-- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:02.443 02:48:48 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:02.443 02:48:48 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:02.443 02:48:48 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:02.443 02:48:48 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:02.444 02:48:48 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:02:02.444 02:48:48 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:02.444 02:48:48 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:02.444 7e06c0de19 examples: move alignment attribute on types for MSVC 00:02:02.444 27595cd830 drivers: move alignment attribute on types for MSVC 00:02:02.444 0efea35a2b app: move alignment attribute on types for MSVC 00:02:02.444 e2e546ab5b version: 24.07-rc0 00:02:02.444 a9778aad62 version: 24.03.0 00:02:02.444 02:48:48 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:02.444 02:48:48 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:02.444 02:48:48 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=24.07.0-rc0 00:02:02.444 02:48:48 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:02.444 02:48:48 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:02.444 02:48:48 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:02.444 02:48:48 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:02.444 02:48:48 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:02.444 02:48:48 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:02.444 02:48:48 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:02.444 02:48:48 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:02.444 02:48:48 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:02.444 02:48:48 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:02.444 02:48:48 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:02.444 02:48:48 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:02.444 02:48:48 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:02.444 02:48:48 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:02.444 02:48:48 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 24.07.0-rc0 21.11.0 00:02:02.444 02:48:48 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 24.07.0-rc0 '<' 21.11.0 00:02:02.444 02:48:48 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:02:02.444 02:48:48 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:02:02.444 02:48:48 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:02:02.444 02:48:48 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:02:02.444 02:48:48 build_native_dpdk -- 
scripts/common.sh@334 -- $ IFS=.-: 00:02:02.444 02:48:48 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:02:02.444 02:48:48 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:02:02.444 02:48:48 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=4 00:02:02.444 02:48:48 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:02:02.444 02:48:48 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:02:02.444 02:48:48 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:02:02.444 02:48:48 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:02:02.444 02:48:48 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:02:02.444 02:48:48 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:02.444 02:48:48 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 24 00:02:02.444 02:48:48 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:02:02.444 02:48:48 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:02.444 02:48:48 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:02:02.444 02:48:48 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=24 00:02:02.444 02:48:48 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:02:02.444 02:48:48 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:02:02.444 02:48:48 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:02.444 02:48:48 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:02:02.444 02:48:48 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:02:02.444 02:48:48 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:02:02.444 02:48:48 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:02:02.444 02:48:48 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:02.444 patching file config/rte_config.h 00:02:02.444 Hunk #1 succeeded at 70 (offset 11 lines). 
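Aside, not part of the console output: the trace above is a field-by-field version comparison of 24.07.0-rc0 against 21.11.0; both strings are split on '.', '-' and ':' and compared numerically, and the check returns 1 as soon as 24 > 21. A standalone sketch of the same idea, not the actual scripts/common.sh implementation:

# Minimal "is version A older than version B" check in the spirit of cmp_versions above.
ver_lt() {
    local IFS=.-: i x y
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        x=${a[i]:-0}; y=${b[i]:-0}
        [[ $x =~ ^[0-9]+$ ]] || x=0   # non-numeric fields such as "rc0" count as 0 in this sketch
        [[ $y =~ ^[0-9]+$ ]] || y=0
        (( 10#$x < 10#$y )) && return 0
        (( 10#$x > 10#$y )) && return 1
    done
    return 1
}
ver_lt 24.07.0-rc0 21.11.0 || echo "24.07.0-rc0 is not older than 21.11.0"   # mirrors the 'return 1' above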
00:02:02.444 02:48:48 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:02:02.444 02:48:48 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s 00:02:02.444 02:48:48 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:02:02.444 02:48:48 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:02.444 02:48:48 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:07.715 The Meson build system 00:02:07.715 Version: 1.3.1 00:02:07.715 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:07.715 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:07.715 Build type: native build 00:02:07.715 Program cat found: YES (/usr/bin/cat) 00:02:07.715 Project name: DPDK 00:02:07.715 Project version: 24.07.0-rc0 00:02:07.715 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:07.715 C linker for the host machine: gcc ld.bfd 2.39-16 00:02:07.715 Host machine cpu family: x86_64 00:02:07.715 Host machine cpu: x86_64 00:02:07.715 Message: ## Building in Developer Mode ## 00:02:07.715 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:07.715 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:07.715 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:07.715 Program python3 found: YES (/usr/bin/python3) 00:02:07.715 Program cat found: YES (/usr/bin/cat) 00:02:07.715 config/meson.build:120: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
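Aside, not part of the console output: the deprecation warning above refers to the -Dmachine=native flag passed on the meson command line at the start of this step; current DPDK exposes the same knob as cpu_instruction_set. An illustrative equivalent invocation, with only the deprecated option renamed:

# Same DPDK configuration as above, using cpu_instruction_set instead of the deprecated machine option.
meson setup build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib \
    -Denable_docs=false -Denable_kmods=false -Dtests=false \
    -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
    -Dcpu_instruction_set=native \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base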
00:02:07.715 Compiler for C supports arguments -march=native: YES 00:02:07.715 Checking for size of "void *" : 8 00:02:07.715 Checking for size of "void *" : 8 (cached) 00:02:07.715 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:07.715 Library m found: YES 00:02:07.715 Library numa found: YES 00:02:07.715 Has header "numaif.h" : YES 00:02:07.715 Library fdt found: NO 00:02:07.715 Library execinfo found: NO 00:02:07.715 Has header "execinfo.h" : YES 00:02:07.715 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:07.715 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:07.715 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:07.715 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:07.715 Run-time dependency openssl found: YES 3.0.9 00:02:07.715 Run-time dependency libpcap found: YES 1.10.4 00:02:07.715 Has header "pcap.h" with dependency libpcap: YES 00:02:07.715 Compiler for C supports arguments -Wcast-qual: YES 00:02:07.715 Compiler for C supports arguments -Wdeprecated: YES 00:02:07.715 Compiler for C supports arguments -Wformat: YES 00:02:07.715 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:07.716 Compiler for C supports arguments -Wformat-security: NO 00:02:07.716 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:07.716 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:07.716 Compiler for C supports arguments -Wnested-externs: YES 00:02:07.716 Compiler for C supports arguments -Wold-style-definition: YES 00:02:07.716 Compiler for C supports arguments -Wpointer-arith: YES 00:02:07.716 Compiler for C supports arguments -Wsign-compare: YES 00:02:07.716 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:07.716 Compiler for C supports arguments -Wundef: YES 00:02:07.716 Compiler for C supports arguments -Wwrite-strings: YES 00:02:07.716 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:07.716 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:07.716 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:07.716 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:07.716 Program objdump found: YES (/usr/bin/objdump) 00:02:07.716 Compiler for C supports arguments -mavx512f: YES 00:02:07.716 Checking if "AVX512 checking" compiles: YES 00:02:07.716 Fetching value of define "__SSE4_2__" : 1 00:02:07.716 Fetching value of define "__AES__" : 1 00:02:07.716 Fetching value of define "__AVX__" : 1 00:02:07.716 Fetching value of define "__AVX2__" : 1 00:02:07.716 Fetching value of define "__AVX512BW__" : (undefined) 00:02:07.716 Fetching value of define "__AVX512CD__" : (undefined) 00:02:07.716 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:07.716 Fetching value of define "__AVX512F__" : (undefined) 00:02:07.716 Fetching value of define "__AVX512VL__" : (undefined) 00:02:07.716 Fetching value of define "__PCLMUL__" : 1 00:02:07.716 Fetching value of define "__RDRND__" : 1 00:02:07.716 Fetching value of define "__RDSEED__" : 1 00:02:07.716 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:07.716 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:07.716 Message: lib/log: Defining dependency "log" 00:02:07.716 Message: lib/kvargs: Defining dependency "kvargs" 00:02:07.716 Message: lib/argparse: Defining dependency "argparse" 00:02:07.716 Message: lib/telemetry: Defining dependency "telemetry" 00:02:07.716 Checking for function "getentropy" : NO 
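Aside, not part of the console output: each "Compiler for C supports arguments ...: YES/NO" probe above amounts to test-compiling a trivial translation unit with the flag in question and checking whether the compiler accepts it. A rough stand-in for one such probe, not meson's actual implementation:

# Probe whether gcc accepts a given flag by compiling an empty main with it.
probe_cflag() {
    echo 'int main(void) { return 0; }' \
        | gcc -Werror "$1" -c -x c -o /dev/null - >/dev/null 2>&1 \
        && echo "Compiler for C supports arguments $1: YES" \
        || echo "Compiler for C supports arguments $1: NO"
}
probe_cflag -mavx512f   # compiles regardless of whether the host CPU actually has AVX-512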
00:02:07.716 Message: lib/eal: Defining dependency "eal" 00:02:07.716 Message: lib/ring: Defining dependency "ring" 00:02:07.716 Message: lib/rcu: Defining dependency "rcu" 00:02:07.716 Message: lib/mempool: Defining dependency "mempool" 00:02:07.716 Message: lib/mbuf: Defining dependency "mbuf" 00:02:07.716 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:07.716 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:07.716 Compiler for C supports arguments -mpclmul: YES 00:02:07.716 Compiler for C supports arguments -maes: YES 00:02:07.716 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:07.716 Compiler for C supports arguments -mavx512bw: YES 00:02:07.716 Compiler for C supports arguments -mavx512dq: YES 00:02:07.716 Compiler for C supports arguments -mavx512vl: YES 00:02:07.716 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:07.716 Compiler for C supports arguments -mavx2: YES 00:02:07.716 Compiler for C supports arguments -mavx: YES 00:02:07.716 Message: lib/net: Defining dependency "net" 00:02:07.716 Message: lib/meter: Defining dependency "meter" 00:02:07.716 Message: lib/ethdev: Defining dependency "ethdev" 00:02:07.716 Message: lib/pci: Defining dependency "pci" 00:02:07.716 Message: lib/cmdline: Defining dependency "cmdline" 00:02:07.716 Message: lib/metrics: Defining dependency "metrics" 00:02:07.716 Message: lib/hash: Defining dependency "hash" 00:02:07.716 Message: lib/timer: Defining dependency "timer" 00:02:07.716 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:07.716 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:02:07.716 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:02:07.716 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:02:07.716 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:02:07.716 Message: lib/acl: Defining dependency "acl" 00:02:07.716 Message: lib/bbdev: Defining dependency "bbdev" 00:02:07.716 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:07.716 Run-time dependency libelf found: YES 0.190 00:02:07.716 Message: lib/bpf: Defining dependency "bpf" 00:02:07.716 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:07.716 Message: lib/compressdev: Defining dependency "compressdev" 00:02:07.716 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:07.716 Message: lib/distributor: Defining dependency "distributor" 00:02:07.716 Message: lib/dmadev: Defining dependency "dmadev" 00:02:07.716 Message: lib/efd: Defining dependency "efd" 00:02:07.716 Message: lib/eventdev: Defining dependency "eventdev" 00:02:07.716 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:07.716 Message: lib/gpudev: Defining dependency "gpudev" 00:02:07.716 Message: lib/gro: Defining dependency "gro" 00:02:07.716 Message: lib/gso: Defining dependency "gso" 00:02:07.716 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:07.716 Message: lib/jobstats: Defining dependency "jobstats" 00:02:07.716 Message: lib/latencystats: Defining dependency "latencystats" 00:02:07.716 Message: lib/lpm: Defining dependency "lpm" 00:02:07.716 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:07.716 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:07.716 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:07.716 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:07.716 Message: lib/member: Defining dependency "member" 00:02:07.716 
Message: lib/pcapng: Defining dependency "pcapng" 00:02:07.716 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:07.716 Message: lib/power: Defining dependency "power" 00:02:07.716 Message: lib/rawdev: Defining dependency "rawdev" 00:02:07.716 Message: lib/regexdev: Defining dependency "regexdev" 00:02:07.716 Message: lib/mldev: Defining dependency "mldev" 00:02:07.716 Message: lib/rib: Defining dependency "rib" 00:02:07.716 Message: lib/reorder: Defining dependency "reorder" 00:02:07.716 Message: lib/sched: Defining dependency "sched" 00:02:07.716 Message: lib/security: Defining dependency "security" 00:02:07.716 Message: lib/stack: Defining dependency "stack" 00:02:07.716 Has header "linux/userfaultfd.h" : YES 00:02:07.716 Has header "linux/vduse.h" : YES 00:02:07.716 Message: lib/vhost: Defining dependency "vhost" 00:02:07.716 Message: lib/ipsec: Defining dependency "ipsec" 00:02:07.716 Message: lib/pdcp: Defining dependency "pdcp" 00:02:07.716 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:07.716 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:07.716 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:02:07.716 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:07.716 Message: lib/fib: Defining dependency "fib" 00:02:07.716 Message: lib/port: Defining dependency "port" 00:02:07.716 Message: lib/pdump: Defining dependency "pdump" 00:02:07.716 Message: lib/table: Defining dependency "table" 00:02:07.716 Message: lib/pipeline: Defining dependency "pipeline" 00:02:07.716 Message: lib/graph: Defining dependency "graph" 00:02:07.716 Message: lib/node: Defining dependency "node" 00:02:07.716 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:07.716 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:07.716 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:09.114 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:09.114 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:09.114 Compiler for C supports arguments -Wno-unused-value: YES 00:02:09.114 Compiler for C supports arguments -Wno-format: YES 00:02:09.114 Compiler for C supports arguments -Wno-format-security: YES 00:02:09.114 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:09.114 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:09.114 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:09.114 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:09.114 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:09.114 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:09.114 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:09.114 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:09.114 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:09.114 Has header "sys/epoll.h" : YES 00:02:09.114 Program doxygen found: YES (/usr/bin/doxygen) 00:02:09.114 Configuring doxy-api-html.conf using configuration 00:02:09.114 Configuring doxy-api-man.conf using configuration 00:02:09.114 Program mandb found: YES (/usr/bin/mandb) 00:02:09.114 Program sphinx-build found: NO 00:02:09.114 Configuring rte_build_config.h using configuration 00:02:09.114 Message: 00:02:09.114 ================= 00:02:09.114 Applications Enabled 00:02:09.114 ================= 00:02:09.114 00:02:09.114 apps: 00:02:09.114 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, 
test-compress-perf, 00:02:09.114 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:09.114 test-pmd, test-regex, test-sad, test-security-perf, 00:02:09.114 00:02:09.114 Message: 00:02:09.114 ================= 00:02:09.114 Libraries Enabled 00:02:09.114 ================= 00:02:09.114 00:02:09.114 libs: 00:02:09.114 log, kvargs, argparse, telemetry, eal, ring, rcu, mempool, 00:02:09.114 mbuf, net, meter, ethdev, pci, cmdline, metrics, hash, 00:02:09.114 timer, acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, 00:02:09.114 distributor, dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, 00:02:09.114 ip_frag, jobstats, latencystats, lpm, member, pcapng, power, rawdev, 00:02:09.114 regexdev, mldev, rib, reorder, sched, security, stack, vhost, 00:02:09.114 ipsec, pdcp, fib, port, pdump, table, pipeline, graph, 00:02:09.114 node, 00:02:09.114 00:02:09.114 Message: 00:02:09.114 =============== 00:02:09.114 Drivers Enabled 00:02:09.114 =============== 00:02:09.114 00:02:09.114 common: 00:02:09.114 00:02:09.114 bus: 00:02:09.114 pci, vdev, 00:02:09.114 mempool: 00:02:09.114 ring, 00:02:09.114 dma: 00:02:09.114 00:02:09.114 net: 00:02:09.114 i40e, 00:02:09.114 raw: 00:02:09.114 00:02:09.114 crypto: 00:02:09.114 00:02:09.114 compress: 00:02:09.114 00:02:09.114 regex: 00:02:09.114 00:02:09.114 ml: 00:02:09.114 00:02:09.114 vdpa: 00:02:09.114 00:02:09.114 event: 00:02:09.114 00:02:09.114 baseband: 00:02:09.114 00:02:09.114 gpu: 00:02:09.114 00:02:09.114 00:02:09.114 Message: 00:02:09.114 ================= 00:02:09.114 Content Skipped 00:02:09.114 ================= 00:02:09.114 00:02:09.114 apps: 00:02:09.114 00:02:09.114 libs: 00:02:09.114 00:02:09.114 drivers: 00:02:09.114 common/cpt: not in enabled drivers build config 00:02:09.114 common/dpaax: not in enabled drivers build config 00:02:09.114 common/iavf: not in enabled drivers build config 00:02:09.114 common/idpf: not in enabled drivers build config 00:02:09.114 common/ionic: not in enabled drivers build config 00:02:09.114 common/mvep: not in enabled drivers build config 00:02:09.114 common/octeontx: not in enabled drivers build config 00:02:09.114 bus/auxiliary: not in enabled drivers build config 00:02:09.114 bus/cdx: not in enabled drivers build config 00:02:09.114 bus/dpaa: not in enabled drivers build config 00:02:09.114 bus/fslmc: not in enabled drivers build config 00:02:09.114 bus/ifpga: not in enabled drivers build config 00:02:09.114 bus/platform: not in enabled drivers build config 00:02:09.114 bus/uacce: not in enabled drivers build config 00:02:09.114 bus/vmbus: not in enabled drivers build config 00:02:09.114 common/cnxk: not in enabled drivers build config 00:02:09.114 common/mlx5: not in enabled drivers build config 00:02:09.114 common/nfp: not in enabled drivers build config 00:02:09.114 common/nitrox: not in enabled drivers build config 00:02:09.114 common/qat: not in enabled drivers build config 00:02:09.114 common/sfc_efx: not in enabled drivers build config 00:02:09.114 mempool/bucket: not in enabled drivers build config 00:02:09.114 mempool/cnxk: not in enabled drivers build config 00:02:09.114 mempool/dpaa: not in enabled drivers build config 00:02:09.114 mempool/dpaa2: not in enabled drivers build config 00:02:09.114 mempool/octeontx: not in enabled drivers build config 00:02:09.114 mempool/stack: not in enabled drivers build config 00:02:09.114 dma/cnxk: not in enabled drivers build config 00:02:09.114 dma/dpaa: not in enabled drivers build 
config 00:02:09.114 dma/dpaa2: not in enabled drivers build config 00:02:09.114 dma/hisilicon: not in enabled drivers build config 00:02:09.114 dma/idxd: not in enabled drivers build config 00:02:09.114 dma/ioat: not in enabled drivers build config 00:02:09.114 dma/skeleton: not in enabled drivers build config 00:02:09.114 net/af_packet: not in enabled drivers build config 00:02:09.114 net/af_xdp: not in enabled drivers build config 00:02:09.114 net/ark: not in enabled drivers build config 00:02:09.114 net/atlantic: not in enabled drivers build config 00:02:09.114 net/avp: not in enabled drivers build config 00:02:09.114 net/axgbe: not in enabled drivers build config 00:02:09.114 net/bnx2x: not in enabled drivers build config 00:02:09.114 net/bnxt: not in enabled drivers build config 00:02:09.114 net/bonding: not in enabled drivers build config 00:02:09.114 net/cnxk: not in enabled drivers build config 00:02:09.114 net/cpfl: not in enabled drivers build config 00:02:09.114 net/cxgbe: not in enabled drivers build config 00:02:09.114 net/dpaa: not in enabled drivers build config 00:02:09.114 net/dpaa2: not in enabled drivers build config 00:02:09.114 net/e1000: not in enabled drivers build config 00:02:09.114 net/ena: not in enabled drivers build config 00:02:09.114 net/enetc: not in enabled drivers build config 00:02:09.114 net/enetfec: not in enabled drivers build config 00:02:09.114 net/enic: not in enabled drivers build config 00:02:09.114 net/failsafe: not in enabled drivers build config 00:02:09.114 net/fm10k: not in enabled drivers build config 00:02:09.114 net/gve: not in enabled drivers build config 00:02:09.114 net/hinic: not in enabled drivers build config 00:02:09.114 net/hns3: not in enabled drivers build config 00:02:09.114 net/iavf: not in enabled drivers build config 00:02:09.114 net/ice: not in enabled drivers build config 00:02:09.114 net/idpf: not in enabled drivers build config 00:02:09.114 net/igc: not in enabled drivers build config 00:02:09.114 net/ionic: not in enabled drivers build config 00:02:09.114 net/ipn3ke: not in enabled drivers build config 00:02:09.114 net/ixgbe: not in enabled drivers build config 00:02:09.114 net/mana: not in enabled drivers build config 00:02:09.114 net/memif: not in enabled drivers build config 00:02:09.114 net/mlx4: not in enabled drivers build config 00:02:09.114 net/mlx5: not in enabled drivers build config 00:02:09.114 net/mvneta: not in enabled drivers build config 00:02:09.114 net/mvpp2: not in enabled drivers build config 00:02:09.114 net/netvsc: not in enabled drivers build config 00:02:09.114 net/nfb: not in enabled drivers build config 00:02:09.114 net/nfp: not in enabled drivers build config 00:02:09.114 net/ngbe: not in enabled drivers build config 00:02:09.114 net/null: not in enabled drivers build config 00:02:09.114 net/octeontx: not in enabled drivers build config 00:02:09.114 net/octeon_ep: not in enabled drivers build config 00:02:09.114 net/pcap: not in enabled drivers build config 00:02:09.114 net/pfe: not in enabled drivers build config 00:02:09.114 net/qede: not in enabled drivers build config 00:02:09.114 net/ring: not in enabled drivers build config 00:02:09.114 net/sfc: not in enabled drivers build config 00:02:09.114 net/softnic: not in enabled drivers build config 00:02:09.114 net/tap: not in enabled drivers build config 00:02:09.114 net/thunderx: not in enabled drivers build config 00:02:09.114 net/txgbe: not in enabled drivers build config 00:02:09.114 net/vdev_netvsc: not in enabled drivers build config 
00:02:09.114 net/vhost: not in enabled drivers build config 00:02:09.114 net/virtio: not in enabled drivers build config 00:02:09.114 net/vmxnet3: not in enabled drivers build config 00:02:09.114 raw/cnxk_bphy: not in enabled drivers build config 00:02:09.114 raw/cnxk_gpio: not in enabled drivers build config 00:02:09.114 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:09.114 raw/ifpga: not in enabled drivers build config 00:02:09.114 raw/ntb: not in enabled drivers build config 00:02:09.114 raw/skeleton: not in enabled drivers build config 00:02:09.114 crypto/armv8: not in enabled drivers build config 00:02:09.114 crypto/bcmfs: not in enabled drivers build config 00:02:09.114 crypto/caam_jr: not in enabled drivers build config 00:02:09.114 crypto/ccp: not in enabled drivers build config 00:02:09.114 crypto/cnxk: not in enabled drivers build config 00:02:09.114 crypto/dpaa_sec: not in enabled drivers build config 00:02:09.115 crypto/dpaa2_sec: not in enabled drivers build config 00:02:09.115 crypto/ipsec_mb: not in enabled drivers build config 00:02:09.115 crypto/mlx5: not in enabled drivers build config 00:02:09.115 crypto/mvsam: not in enabled drivers build config 00:02:09.115 crypto/nitrox: not in enabled drivers build config 00:02:09.115 crypto/null: not in enabled drivers build config 00:02:09.115 crypto/octeontx: not in enabled drivers build config 00:02:09.115 crypto/openssl: not in enabled drivers build config 00:02:09.115 crypto/scheduler: not in enabled drivers build config 00:02:09.115 crypto/uadk: not in enabled drivers build config 00:02:09.115 crypto/virtio: not in enabled drivers build config 00:02:09.115 compress/isal: not in enabled drivers build config 00:02:09.115 compress/mlx5: not in enabled drivers build config 00:02:09.115 compress/nitrox: not in enabled drivers build config 00:02:09.115 compress/octeontx: not in enabled drivers build config 00:02:09.115 compress/zlib: not in enabled drivers build config 00:02:09.115 regex/mlx5: not in enabled drivers build config 00:02:09.115 regex/cn9k: not in enabled drivers build config 00:02:09.115 ml/cnxk: not in enabled drivers build config 00:02:09.115 vdpa/ifc: not in enabled drivers build config 00:02:09.115 vdpa/mlx5: not in enabled drivers build config 00:02:09.115 vdpa/nfp: not in enabled drivers build config 00:02:09.115 vdpa/sfc: not in enabled drivers build config 00:02:09.115 event/cnxk: not in enabled drivers build config 00:02:09.115 event/dlb2: not in enabled drivers build config 00:02:09.115 event/dpaa: not in enabled drivers build config 00:02:09.115 event/dpaa2: not in enabled drivers build config 00:02:09.115 event/dsw: not in enabled drivers build config 00:02:09.115 event/opdl: not in enabled drivers build config 00:02:09.115 event/skeleton: not in enabled drivers build config 00:02:09.115 event/sw: not in enabled drivers build config 00:02:09.115 event/octeontx: not in enabled drivers build config 00:02:09.115 baseband/acc: not in enabled drivers build config 00:02:09.115 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:09.115 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:09.115 baseband/la12xx: not in enabled drivers build config 00:02:09.115 baseband/null: not in enabled drivers build config 00:02:09.115 baseband/turbo_sw: not in enabled drivers build config 00:02:09.115 gpu/cuda: not in enabled drivers build config 00:02:09.115 00:02:09.115 00:02:09.115 Build targets in project: 224 00:02:09.115 00:02:09.115 DPDK 24.07.0-rc0 00:02:09.115 00:02:09.115 User 
defined options 00:02:09.115 libdir : lib 00:02:09.115 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:09.115 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:09.115 c_link_args : 00:02:09.115 enable_docs : false 00:02:09.115 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:09.115 enable_kmods : false 00:02:09.115 machine : native 00:02:09.115 tests : false 00:02:09.115 00:02:09.115 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:09.115 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:02:09.115 02:48:55 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:09.115 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:09.374 [1/722] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:09.374 [2/722] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:09.374 [3/722] Linking static target lib/librte_kvargs.a 00:02:09.374 [4/722] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:09.374 [5/722] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:09.374 [6/722] Linking static target lib/librte_log.a 00:02:09.633 [7/722] Compiling C object lib/librte_argparse.a.p/argparse_rte_argparse.c.o 00:02:09.633 [8/722] Linking static target lib/librte_argparse.a 00:02:09.633 [9/722] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.892 [10/722] Generating lib/argparse.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.892 [11/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:09.892 [12/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:09.892 [13/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:09.892 [14/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:09.892 [15/722] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.892 [16/722] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:09.892 [17/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:09.892 [18/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:09.892 [19/722] Linking target lib/librte_log.so.24.2 00:02:10.150 [20/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:10.409 [21/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:10.409 [22/722] Generating symbol file lib/librte_log.so.24.2.p/librte_log.so.24.2.symbols 00:02:10.409 [23/722] Linking target lib/librte_kvargs.so.24.2 00:02:10.409 [24/722] Generating symbol file lib/librte_kvargs.so.24.2.p/librte_kvargs.so.24.2.symbols 00:02:10.409 [25/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:10.409 [26/722] Linking target lib/librte_argparse.so.24.2 00:02:10.409 [27/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:10.668 [28/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:10.668 [29/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:10.668 [30/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:10.668 [31/722] Compiling C object 
lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:10.668 [32/722] Linking static target lib/librte_telemetry.a 00:02:10.668 [33/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:10.668 [34/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:10.926 [35/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:10.926 [36/722] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.926 [37/722] Linking target lib/librte_telemetry.so.24.2 00:02:11.186 [38/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:11.186 [39/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:11.186 [40/722] Generating symbol file lib/librte_telemetry.so.24.2.p/librte_telemetry.so.24.2.symbols 00:02:11.186 [41/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:11.186 [42/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:11.186 [43/722] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:11.186 [44/722] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:11.186 [45/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:11.186 [46/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:11.186 [47/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:11.186 [48/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:11.445 [49/722] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:11.704 [50/722] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:11.704 [51/722] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:11.963 [52/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:11.963 [53/722] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:11.963 [54/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:11.963 [55/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:11.963 [56/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:11.963 [57/722] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:12.222 [58/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:12.222 [59/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:12.222 [60/722] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:12.481 [61/722] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:12.481 [62/722] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:12.481 [63/722] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:12.481 [64/722] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:12.481 [65/722] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:12.481 [66/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:12.481 [67/722] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:12.740 [68/722] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:12.740 [69/722] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:12.740 [70/722] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:12.998 [71/722] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:13.257 [72/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:13.257 [73/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:13.257 [74/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:13.257 [75/722] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:13.257 [76/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:13.257 [77/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:13.257 [78/722] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:13.257 [79/722] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:13.257 [80/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:13.515 [81/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:13.515 [82/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:13.774 [83/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:13.774 [84/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:13.774 [85/722] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:13.774 [86/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:14.033 [87/722] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:14.033 [88/722] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:14.033 [89/722] Linking static target lib/librte_ring.a 00:02:14.292 [90/722] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:14.292 [91/722] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.292 [92/722] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:14.292 [93/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:14.292 [94/722] Linking static target lib/librte_eal.a 00:02:14.551 [95/722] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:14.551 [96/722] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:14.551 [97/722] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:14.551 [98/722] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:14.551 [99/722] Linking static target lib/librte_mempool.a 00:02:14.810 [100/722] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:14.810 [101/722] Linking static target lib/librte_rcu.a 00:02:14.810 [102/722] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:14.810 [103/722] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:15.068 [104/722] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.068 [105/722] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:15.068 [106/722] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:15.068 [107/722] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:15.326 [108/722] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.327 [109/722] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:15.327 [110/722] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:15.327 [111/722] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:15.327 [112/722] 
Linking static target lib/librte_mbuf.a 00:02:15.584 [113/722] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:15.584 [114/722] Linking static target lib/librte_net.a 00:02:15.584 [115/722] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:15.584 [116/722] Linking static target lib/librte_meter.a 00:02:15.841 [117/722] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:15.841 [118/722] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.841 [119/722] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:15.842 [120/722] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.842 [121/722] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.099 [122/722] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:16.099 [123/722] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:16.665 [124/722] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:16.665 [125/722] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:16.924 [126/722] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:16.924 [127/722] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:16.924 [128/722] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:17.182 [129/722] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:17.182 [130/722] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:17.182 [131/722] Linking static target lib/librte_pci.a 00:02:17.182 [132/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:17.182 [133/722] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:17.441 [134/722] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.441 [135/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:17.441 [136/722] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:17.441 [137/722] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:17.441 [138/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:17.441 [139/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:17.441 [140/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:17.709 [141/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:17.709 [142/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:17.709 [143/722] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:17.709 [144/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:17.709 [145/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:17.709 [146/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:17.709 [147/722] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:17.709 [148/722] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:17.971 [149/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:17.971 [150/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:17.971 [151/722] Linking static target lib/librte_cmdline.a 00:02:17.971 [152/722] Compiling C 
object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:18.230 [153/722] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:18.230 [154/722] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:18.230 [155/722] Linking static target lib/librte_metrics.a 00:02:18.491 [156/722] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:18.491 [157/722] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:18.805 [158/722] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.805 [159/722] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.805 [160/722] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:19.068 [161/722] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:19.068 [162/722] Linking static target lib/librte_timer.a 00:02:19.329 [163/722] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.587 [164/722] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:19.587 [165/722] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:19.844 [166/722] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:19.844 [167/722] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:20.410 [168/722] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:20.410 [169/722] Linking static target lib/librte_ethdev.a 00:02:20.410 [170/722] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:20.410 [171/722] Linking static target lib/librte_bitratestats.a 00:02:20.410 [172/722] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:20.669 [173/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:20.669 [174/722] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:20.669 [175/722] Linking static target lib/librte_bbdev.a 00:02:20.669 [176/722] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.669 [177/722] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.669 [178/722] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:20.669 [179/722] Linking static target lib/librte_hash.a 00:02:20.669 [180/722] Linking target lib/librte_eal.so.24.2 00:02:20.927 [181/722] Generating symbol file lib/librte_eal.so.24.2.p/librte_eal.so.24.2.symbols 00:02:20.927 [182/722] Linking target lib/librte_ring.so.24.2 00:02:21.185 [183/722] Generating symbol file lib/librte_ring.so.24.2.p/librte_ring.so.24.2.symbols 00:02:21.185 [184/722] Linking target lib/librte_rcu.so.24.2 00:02:21.185 [185/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:21.185 [186/722] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:21.185 [187/722] Linking target lib/librte_mempool.so.24.2 00:02:21.185 [188/722] Linking target lib/librte_meter.so.24.2 00:02:21.185 [189/722] Generating symbol file lib/librte_rcu.so.24.2.p/librte_rcu.so.24.2.symbols 00:02:21.186 [190/722] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.443 [191/722] Linking target lib/librte_pci.so.24.2 00:02:21.443 [192/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:21.443 [193/722] Linking target lib/librte_timer.so.24.2 00:02:21.443 [194/722] Linking static target lib/acl/libavx2_tmp.a 00:02:21.443 [195/722] Generating symbol file 
lib/librte_mempool.so.24.2.p/librte_mempool.so.24.2.symbols 00:02:21.443 [196/722] Generating symbol file lib/librte_meter.so.24.2.p/librte_meter.so.24.2.symbols 00:02:21.443 [197/722] Linking target lib/librte_mbuf.so.24.2 00:02:21.443 [198/722] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.443 [199/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:21.443 [200/722] Generating symbol file lib/librte_pci.so.24.2.p/librte_pci.so.24.2.symbols 00:02:21.443 [201/722] Generating symbol file lib/librte_timer.so.24.2.p/librte_timer.so.24.2.symbols 00:02:21.443 [202/722] Generating symbol file lib/librte_mbuf.so.24.2.p/librte_mbuf.so.24.2.symbols 00:02:21.443 [203/722] Linking target lib/librte_net.so.24.2 00:02:21.701 [204/722] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:21.701 [205/722] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:21.701 [206/722] Generating symbol file lib/librte_net.so.24.2.p/librte_net.so.24.2.symbols 00:02:21.701 [207/722] Linking static target lib/acl/libavx512_tmp.a 00:02:21.701 [208/722] Linking target lib/librte_bbdev.so.24.2 00:02:21.701 [209/722] Linking target lib/librte_cmdline.so.24.2 00:02:21.701 [210/722] Linking target lib/librte_hash.so.24.2 00:02:21.701 [211/722] Linking static target lib/librte_acl.a 00:02:21.959 [212/722] Generating symbol file lib/librte_hash.so.24.2.p/librte_hash.so.24.2.symbols 00:02:21.959 [213/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:21.959 [214/722] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.959 [215/722] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:21.959 [216/722] Linking static target lib/librte_cfgfile.a 00:02:22.216 [217/722] Linking target lib/librte_acl.so.24.2 00:02:22.216 [218/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:22.216 [219/722] Generating symbol file lib/librte_acl.so.24.2.p/librte_acl.so.24.2.symbols 00:02:22.216 [220/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:22.474 [221/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:22.474 [222/722] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.474 [223/722] Linking target lib/librte_cfgfile.so.24.2 00:02:22.474 [224/722] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:22.732 [225/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:22.732 [226/722] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:22.990 [227/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:22.990 [228/722] Linking static target lib/librte_bpf.a 00:02:22.990 [229/722] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:22.990 [230/722] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:22.990 [231/722] Linking static target lib/librte_compressdev.a 00:02:23.247 [232/722] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:23.247 [233/722] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.247 [234/722] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:23.247 [235/722] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:23.505 [236/722] Generating 
lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.505 [237/722] Linking target lib/librte_compressdev.so.24.2 00:02:23.505 [238/722] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:23.505 [239/722] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:23.505 [240/722] Linking static target lib/librte_distributor.a 00:02:23.763 [241/722] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:23.763 [242/722] Linking static target lib/librte_dmadev.a 00:02:23.763 [243/722] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.763 [244/722] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:23.763 [245/722] Linking target lib/librte_distributor.so.24.2 00:02:24.329 [246/722] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.329 [247/722] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:24.329 [248/722] Linking target lib/librte_dmadev.so.24.2 00:02:24.329 [249/722] Generating symbol file lib/librte_dmadev.so.24.2.p/librte_dmadev.so.24.2.symbols 00:02:24.587 [250/722] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:24.846 [251/722] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:24.846 [252/722] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:24.846 [253/722] Linking static target lib/librte_efd.a 00:02:25.104 [254/722] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:25.104 [255/722] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:25.104 [256/722] Linking static target lib/librte_cryptodev.a 00:02:25.104 [257/722] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.104 [258/722] Linking target lib/librte_efd.so.24.2 00:02:25.362 [259/722] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:25.362 [260/722] Linking static target lib/librte_dispatcher.a 00:02:25.620 [261/722] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:25.620 [262/722] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.620 [263/722] Linking target lib/librte_ethdev.so.24.2 00:02:25.620 [264/722] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:25.620 [265/722] Linking static target lib/librte_gpudev.a 00:02:25.620 [266/722] Generating symbol file lib/librte_ethdev.so.24.2.p/librte_ethdev.so.24.2.symbols 00:02:25.620 [267/722] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:25.878 [268/722] Linking target lib/librte_metrics.so.24.2 00:02:25.878 [269/722] Linking target lib/librte_bpf.so.24.2 00:02:25.878 [270/722] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.878 [271/722] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:25.878 [272/722] Generating symbol file lib/librte_metrics.so.24.2.p/librte_metrics.so.24.2.symbols 00:02:25.878 [273/722] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:25.878 [274/722] Generating symbol file lib/librte_bpf.so.24.2.p/librte_bpf.so.24.2.symbols 00:02:25.879 [275/722] Linking target lib/librte_bitratestats.so.24.2 00:02:26.442 [276/722] Generating lib/cryptodev.sym_chk with a custom 
command (wrapped by meson to capture output) 00:02:26.442 [277/722] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:26.442 [278/722] Linking target lib/librte_cryptodev.so.24.2 00:02:26.442 [279/722] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:26.442 [280/722] Generating symbol file lib/librte_cryptodev.so.24.2.p/librte_cryptodev.so.24.2.symbols 00:02:26.442 [281/722] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.699 [282/722] Linking target lib/librte_gpudev.so.24.2 00:02:26.699 [283/722] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:26.699 [284/722] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:26.699 [285/722] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:26.699 [286/722] Linking static target lib/librte_eventdev.a 00:02:26.699 [287/722] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:26.699 [288/722] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:26.957 [289/722] Linking static target lib/librte_gro.a 00:02:26.957 [290/722] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:26.957 [291/722] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:26.957 [292/722] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.957 [293/722] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:27.215 [294/722] Linking target lib/librte_gro.so.24.2 00:02:27.215 [295/722] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:27.215 [296/722] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:27.215 [297/722] Linking static target lib/librte_gso.a 00:02:27.473 [298/722] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.473 [299/722] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:27.473 [300/722] Linking target lib/librte_gso.so.24.2 00:02:27.473 [301/722] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:27.473 [302/722] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:27.730 [303/722] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:27.730 [304/722] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:27.730 [305/722] Linking static target lib/librte_jobstats.a 00:02:27.730 [306/722] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:27.987 [307/722] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:27.987 [308/722] Linking static target lib/librte_ip_frag.a 00:02:27.987 [309/722] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:27.987 [310/722] Linking static target lib/librte_latencystats.a 00:02:27.987 [311/722] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.987 [312/722] Linking target lib/librte_jobstats.so.24.2 00:02:28.244 [313/722] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.244 [314/722] Linking target lib/librte_latencystats.so.24.2 00:02:28.244 [315/722] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.244 [316/722] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:28.244 [317/722] Linking static target 
lib/member/libsketch_avx512_tmp.a 00:02:28.244 [318/722] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:28.244 [319/722] Linking target lib/librte_ip_frag.so.24.2 00:02:28.244 [320/722] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:28.513 [321/722] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:28.513 [322/722] Generating symbol file lib/librte_ip_frag.so.24.2.p/librte_ip_frag.so.24.2.symbols 00:02:28.513 [323/722] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:28.513 [324/722] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:28.785 [325/722] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.785 [326/722] Linking target lib/librte_eventdev.so.24.2 00:02:29.042 [327/722] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:29.042 [328/722] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:29.043 [329/722] Linking static target lib/librte_lpm.a 00:02:29.043 [330/722] Generating symbol file lib/librte_eventdev.so.24.2.p/librte_eventdev.so.24.2.symbols 00:02:29.043 [331/722] Linking target lib/librte_dispatcher.so.24.2 00:02:29.043 [332/722] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:29.300 [333/722] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:29.300 [334/722] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:29.300 [335/722] Linking static target lib/librte_pcapng.a 00:02:29.300 [336/722] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:29.300 [337/722] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.300 [338/722] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:29.300 [339/722] Linking target lib/librte_lpm.so.24.2 00:02:29.300 [340/722] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:29.557 [341/722] Generating symbol file lib/librte_lpm.so.24.2.p/librte_lpm.so.24.2.symbols 00:02:29.557 [342/722] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.557 [343/722] Linking target lib/librte_pcapng.so.24.2 00:02:29.557 [344/722] Generating symbol file lib/librte_pcapng.so.24.2.p/librte_pcapng.so.24.2.symbols 00:02:29.557 [345/722] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:29.815 [346/722] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:29.815 [347/722] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:30.072 [348/722] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:30.072 [349/722] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:30.072 [350/722] Linking static target lib/librte_power.a 00:02:30.072 [351/722] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:30.072 [352/722] Linking static target lib/librte_member.a 00:02:30.072 [353/722] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:30.072 [354/722] Linking static target lib/librte_regexdev.a 00:02:30.072 [355/722] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:30.072 [356/722] Linking static target lib/librte_rawdev.a 00:02:30.330 [357/722] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:30.330 [358/722] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:30.330 [359/722] Compiling 
C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:30.330 [360/722] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.588 [361/722] Linking target lib/librte_member.so.24.2 00:02:30.588 [362/722] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:30.588 [363/722] Linking static target lib/librte_mldev.a 00:02:30.588 [364/722] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:30.588 [365/722] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.588 [366/722] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.588 [367/722] Linking target lib/librte_rawdev.so.24.2 00:02:30.846 [368/722] Linking target lib/librte_power.so.24.2 00:02:30.846 [369/722] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:30.846 [370/722] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.104 [371/722] Linking target lib/librte_regexdev.so.24.2 00:02:31.104 [372/722] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:31.105 [373/722] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:31.105 [374/722] Linking static target lib/librte_rib.a 00:02:31.105 [375/722] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:31.105 [376/722] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:31.105 [377/722] Linking static target lib/librte_reorder.a 00:02:31.363 [378/722] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:31.363 [379/722] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:31.363 [380/722] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:31.621 [381/722] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.621 [382/722] Linking target lib/librte_reorder.so.24.2 00:02:31.621 [383/722] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:31.621 [384/722] Linking static target lib/librte_stack.a 00:02:31.621 [385/722] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:31.621 [386/722] Linking static target lib/librte_security.a 00:02:31.621 [387/722] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.621 [388/722] Linking target lib/librte_rib.so.24.2 00:02:31.621 [389/722] Generating symbol file lib/librte_reorder.so.24.2.p/librte_reorder.so.24.2.symbols 00:02:31.880 [390/722] Generating symbol file lib/librte_rib.so.24.2.p/librte_rib.so.24.2.symbols 00:02:31.880 [391/722] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.880 [392/722] Linking target lib/librte_stack.so.24.2 00:02:32.138 [393/722] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:32.138 [394/722] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.138 [395/722] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.138 [396/722] Linking target lib/librte_security.so.24.2 00:02:32.138 [397/722] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:32.138 [398/722] Linking target lib/librte_mldev.so.24.2 00:02:32.138 [399/722] Generating symbol file lib/librte_security.so.24.2.p/librte_security.so.24.2.symbols 00:02:32.396 [400/722] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:32.396 [401/722] 
Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:32.396 [402/722] Linking static target lib/librte_sched.a 00:02:32.653 [403/722] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.910 [404/722] Linking target lib/librte_sched.so.24.2 00:02:32.910 [405/722] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:32.910 [406/722] Generating symbol file lib/librte_sched.so.24.2.p/librte_sched.so.24.2.symbols 00:02:32.910 [407/722] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:33.168 [408/722] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:33.168 [409/722] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:33.427 [410/722] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:33.685 [411/722] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:33.685 [412/722] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:33.685 [413/722] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:33.942 [414/722] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:33.942 [415/722] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:33.942 [416/722] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:34.199 [417/722] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:34.199 [418/722] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:34.457 [419/722] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:34.457 [420/722] Linking static target lib/librte_ipsec.a 00:02:34.457 [421/722] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:34.457 [422/722] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:34.714 [423/722] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:34.714 [424/722] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:34.714 [425/722] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:34.714 [426/722] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:34.714 [427/722] Compiling C object lib/librte_port.a.p/port_port_log.c.o 00:02:34.714 [428/722] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:34.714 [429/722] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.971 [430/722] Linking target lib/librte_ipsec.so.24.2 00:02:34.971 [431/722] Generating symbol file lib/librte_ipsec.so.24.2.p/librte_ipsec.so.24.2.symbols 00:02:35.537 [432/722] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:35.795 [433/722] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:35.795 [434/722] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:35.795 [435/722] Linking static target lib/librte_fib.a 00:02:35.795 [436/722] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:35.795 [437/722] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:35.795 [438/722] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:35.795 [439/722] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:35.795 [440/722] Linking static target lib/librte_pdcp.a 00:02:36.052 [441/722] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.052 [442/722] Linking target lib/librte_fib.so.24.2 00:02:36.052 [443/722] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.311 [444/722] Linking 
target lib/librte_pdcp.so.24.2 00:02:36.311 [445/722] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:36.877 [446/722] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:36.877 [447/722] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:36.877 [448/722] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:36.877 [449/722] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:36.877 [450/722] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:37.136 [451/722] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:37.394 [452/722] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:37.394 [453/722] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:37.394 [454/722] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:37.394 [455/722] Linking static target lib/librte_port.a 00:02:37.652 [456/722] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:37.652 [457/722] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:37.652 [458/722] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:37.910 [459/722] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:37.910 [460/722] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.910 [461/722] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:37.910 [462/722] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:38.167 [463/722] Linking target lib/librte_port.so.24.2 00:02:38.167 [464/722] Linking static target lib/librte_pdump.a 00:02:38.167 [465/722] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:38.167 [466/722] Generating symbol file lib/librte_port.so.24.2.p/librte_port.so.24.2.symbols 00:02:38.167 [467/722] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.425 [468/722] Linking target lib/librte_pdump.so.24.2 00:02:38.425 [469/722] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:38.682 [470/722] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:38.682 [471/722] Compiling C object lib/librte_table.a.p/table_table_log.c.o 00:02:38.960 [472/722] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:38.960 [473/722] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:38.960 [474/722] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:38.960 [475/722] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:38.960 [476/722] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:39.246 [477/722] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:39.504 [478/722] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:39.504 [479/722] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:39.504 [480/722] Linking static target lib/librte_table.a 00:02:39.504 [481/722] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:39.762 [482/722] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:40.328 [483/722] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.328 [484/722] Compiling C object 
lib/librte_graph.a.p/graph_node.c.o 00:02:40.328 [485/722] Linking target lib/librte_table.so.24.2 00:02:40.328 [486/722] Generating symbol file lib/librte_table.so.24.2.p/librte_table.so.24.2.symbols 00:02:40.587 [487/722] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:40.587 [488/722] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:40.587 [489/722] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:40.845 [490/722] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:40.845 [491/722] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:41.103 [492/722] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:02:41.103 [493/722] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:41.361 [494/722] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:02:41.362 [495/722] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:41.620 [496/722] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:41.620 [497/722] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:41.878 [498/722] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:02:41.878 [499/722] Linking static target lib/librte_graph.a 00:02:41.878 [500/722] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:41.878 [501/722] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:42.136 [502/722] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:42.394 [503/722] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.394 [504/722] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:02:42.653 [505/722] Linking target lib/librte_graph.so.24.2 00:02:42.653 [506/722] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:42.653 [507/722] Generating symbol file lib/librte_graph.so.24.2.p/librte_graph.so.24.2.symbols 00:02:42.653 [508/722] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:43.219 [509/722] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:02:43.219 [510/722] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:43.219 [511/722] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:02:43.219 [512/722] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:02:43.219 [513/722] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:43.219 [514/722] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:43.477 [515/722] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:43.735 [516/722] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:43.735 [517/722] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:43.993 [518/722] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:43.993 [519/722] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:43.993 [520/722] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:43.993 [521/722] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:44.251 [522/722] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:02:44.251 [523/722] Linking static target lib/librte_node.a 00:02:44.251 [524/722] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:44.510 [525/722] Compiling C object 
drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:44.510 [526/722] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:44.510 [527/722] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.510 [528/722] Linking target lib/librte_node.so.24.2 00:02:44.510 [529/722] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:44.510 [530/722] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:44.769 [531/722] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:44.769 [532/722] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:44.769 [533/722] Linking static target drivers/librte_bus_vdev.a 00:02:44.769 [534/722] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:44.769 [535/722] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:44.769 [536/722] Linking static target drivers/librte_bus_pci.a 00:02:45.027 [537/722] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.027 [538/722] Compiling C object drivers/librte_bus_pci.so.24.2.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:45.027 [539/722] Compiling C object drivers/librte_bus_vdev.so.24.2.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:45.027 [540/722] Linking target drivers/librte_bus_vdev.so.24.2 00:02:45.027 [541/722] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:45.286 [542/722] Generating symbol file drivers/librte_bus_vdev.so.24.2.p/librte_bus_vdev.so.24.2.symbols 00:02:45.286 [543/722] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:45.286 [544/722] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:45.286 [545/722] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:45.286 [546/722] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:45.543 [547/722] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.543 [548/722] Linking target drivers/librte_bus_pci.so.24.2 00:02:45.543 [549/722] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:45.543 [550/722] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:45.543 [551/722] Linking static target drivers/librte_mempool_ring.a 00:02:45.543 [552/722] Compiling C object drivers/librte_mempool_ring.so.24.2.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:45.543 [553/722] Linking target drivers/librte_mempool_ring.so.24.2 00:02:45.543 [554/722] Generating symbol file drivers/librte_bus_pci.so.24.2.p/librte_bus_pci.so.24.2.symbols 00:02:45.800 [555/722] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:46.057 [556/722] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:46.315 [557/722] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:46.572 [558/722] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:46.572 [559/722] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:47.137 [560/722] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:47.395 [561/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:47.396 [562/722] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 
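The i40e driver objects above are compiled because this run's configuration summary (the "User defined options" block earlier in the log) limits enable_drivers to bus, bus/pci, bus/vdev, mempool/ring, net/i40e and net/i40e/base. As a rough sketch only (the exact commands issued by the autotest wrapper are not captured in this excerpt), an equivalent manual configure-and-build using the options reported in that summary would look approximately like this:

  $ cd /home/vagrant/spdk_repo/dpdk
  $ meson setup build-tmp \
        --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir=lib \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
        -Denable_docs=false -Denable_kmods=false -Dtests=false -Dmachine=native
  $ ninja -C build-tmp -j10

Spelling out `meson setup` (rather than the bare `meson [options]` form) also avoids the "ambiguous and deprecated" warning reported just before the ninja step earlier in the log.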
00:02:47.396 [563/722] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:47.653 [564/722] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:47.653 [565/722] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:47.911 [566/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:48.168 [567/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:48.168 [568/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:48.425 [569/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:48.425 [570/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:02:48.425 [571/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:48.425 [572/722] Generating app/graph/commands_hdr with a custom command (wrapped by meson to capture output) 00:02:49.369 [573/722] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:02:49.369 [574/722] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:49.955 [575/722] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:49.955 [576/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:49.955 [577/722] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:02:49.955 [578/722] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:02:49.955 [579/722] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:49.955 [580/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:50.211 [581/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:50.211 [582/722] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:50.211 [583/722] Linking static target lib/librte_vhost.a 00:02:50.468 [584/722] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:02:50.726 [585/722] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:02:50.726 [586/722] Compiling C object app/dpdk-graph.p/graph_l2fwd.c.o 00:02:50.726 [587/722] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:02:50.726 [588/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:50.726 [589/722] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:50.726 [590/722] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:02:50.726 [591/722] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:02:50.982 [592/722] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:02:50.982 [593/722] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:50.982 [594/722] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:51.239 [595/722] Linking static target drivers/librte_net_i40e.a 00:02:51.239 [596/722] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:02:51.239 [597/722] Compiling C object drivers/librte_net_i40e.so.24.2.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:51.239 [598/722] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:02:51.496 [599/722] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:51.496 [600/722] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:51.496 [601/722] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:02:51.496 [602/722] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:51.496 [603/722] Generating lib/vhost.sym_chk with a 
custom command (wrapped by meson to capture output) 00:02:51.753 [604/722] Linking target lib/librte_vhost.so.24.2 00:02:51.753 [605/722] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:51.753 [606/722] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.011 [607/722] Linking target drivers/librte_net_i40e.so.24.2 00:02:52.011 [608/722] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:52.268 [609/722] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:52.526 [610/722] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:52.526 [611/722] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:52.784 [612/722] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:52.784 [613/722] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:52.784 [614/722] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:52.784 [615/722] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:53.042 [616/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:53.300 [617/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:53.300 [618/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:53.559 [619/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:53.559 [620/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:53.817 [621/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:53.817 [622/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:53.817 [623/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:53.817 [624/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:54.075 [625/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:54.075 [626/722] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:02:54.075 [627/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:54.334 [628/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:54.334 [629/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:54.592 [630/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:54.850 [631/722] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:02:54.850 [632/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:54.850 [633/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:55.109 [634/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:56.041 [635/722] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:56.041 [636/722] Linking static target lib/librte_pipeline.a 00:02:56.041 [637/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:56.041 [638/722] Compiling C object 
app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:56.041 [639/722] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:56.041 [640/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:56.299 [641/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:56.299 [642/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:56.299 [643/722] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:56.557 [644/722] Linking target app/dpdk-dumpcap 00:02:56.557 [645/722] Linking target app/dpdk-graph 00:02:56.557 [646/722] Linking target app/dpdk-pdump 00:02:56.557 [647/722] Linking target app/dpdk-proc-info 00:02:56.816 [648/722] Linking target app/dpdk-test-acl 00:02:56.816 [649/722] Linking target app/dpdk-test-cmdline 00:02:56.816 [650/722] Linking target app/dpdk-test-compress-perf 00:02:57.074 [651/722] Linking target app/dpdk-test-crypto-perf 00:02:57.074 [652/722] Linking target app/dpdk-test-dma-perf 00:02:57.074 [653/722] Linking target app/dpdk-test-fib 00:02:57.074 [654/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:02:57.332 [655/722] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:57.332 [656/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:02:57.590 [657/722] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:57.590 [658/722] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:57.590 [659/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:02:57.848 [660/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:02:58.106 [661/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:02:58.106 [662/722] Linking target app/dpdk-test-gpudev 00:02:58.106 [663/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:02:58.106 [664/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:02:58.363 [665/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:58.363 [666/722] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:58.363 [667/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:02:58.621 [668/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:02:58.621 [669/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:02:58.621 [670/722] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:58.621 [671/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:02:58.879 [672/722] Linking target app/dpdk-test-eventdev 00:02:58.879 [673/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:58.879 [674/722] Linking target app/dpdk-test-flow-perf 00:02:58.879 [675/722] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.879 [676/722] Linking target lib/librte_pipeline.so.24.2 00:02:59.137 [677/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:59.137 [678/722] Linking target app/dpdk-test-bbdev 00:02:59.396 [679/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:59.396 [680/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:59.396 [681/722] Compiling C 
object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:59.396 [682/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:59.396 [683/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:59.654 [684/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:59.921 [685/722] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:59.921 [686/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:00.196 [687/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:03:00.196 [688/722] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:03:00.196 [689/722] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:00.454 [690/722] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:00.454 [691/722] Linking target app/dpdk-test-pipeline 00:03:00.712 [692/722] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:00.712 [693/722] Linking target app/dpdk-test-mldev 00:03:00.970 [694/722] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:01.228 [695/722] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:01.228 [696/722] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:01.228 [697/722] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:01.487 [698/722] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:01.487 [699/722] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:01.745 [700/722] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:01.745 [701/722] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:02.004 [702/722] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:03:02.004 [703/722] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:02.004 [704/722] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:02.262 [705/722] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:02.520 [706/722] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:02.778 [707/722] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:03.034 [708/722] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:03.034 [709/722] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:03.291 [710/722] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:03.291 [711/722] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:03.291 [712/722] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:03.548 [713/722] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:03.548 [714/722] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:03.548 [715/722] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:03.804 [716/722] Linking target app/dpdk-test-sad 00:03:03.804 [717/722] Linking target app/dpdk-test-regex 00:03:03.804 [718/722] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:03.804 [719/722] Compiling C object app/dpdk-test-security-perf.p/test_test_security_proto.c.o 00:03:04.061 [720/722] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:04.319 [721/722] Linking target app/dpdk-testpmd 00:03:04.577 [722/722] Linking target app/dpdk-test-security-perf 00:03:04.577 02:49:50 build_native_dpdk -- 
common/autobuild_common.sh@187 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:04.834 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:04.834 [0/1] Installing files. 00:03:05.095 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:05.095 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:05.095 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:05.095 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:05.095 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:05.095 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:05.095 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:05.095 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:05.095 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:05.095 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:05.095 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:05.095 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:05.095 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:05.096 
Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:05.096 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:05.096 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:05.096 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:05.097 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:05.097 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 
00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:05.097 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 
00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:05.098 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.099 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipv6_addr_swap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipv6_addr_swap.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:05.099 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:05.100 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 
00:03:05.100 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:05.100 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:05.100 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:05.100 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:05.100 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:05.100 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:05.100 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:05.100 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:05.100 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:05.100 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:05.100 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:05.100 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:05.100 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:05.100 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:05.100 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:05.100 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:05.100 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:05.100 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:05.100 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:05.100 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:05.100 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:05.100 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:05.100 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:05.100 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:05.100 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:05.100 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:05.100 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:05.100 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:05.100 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:05.100 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:05.100 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:05.100 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:05.100 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:05.100 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:05.100 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:05.100 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:05.100 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:05.100 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:05.100 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:05.100 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:05.100 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:05.100 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:05.100 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:05.100 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:05.100 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:05.100 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:05.100 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:05.100 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:05.100 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:05.100 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:05.100 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:05.100 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:05.100 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:05.100 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:05.100 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:05.100 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:05.100 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:05.100 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.100 Installing lib/librte_log.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.100 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.100 Installing lib/librte_kvargs.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.100 Installing lib/librte_argparse.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.100 Installing lib/librte_argparse.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.100 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.100 Installing lib/librte_telemetry.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.100 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.100 Installing lib/librte_eal.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.100 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.100 Installing lib/librte_ring.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.100 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.100 Installing lib/librte_rcu.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.100 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.100 Installing lib/librte_mempool.so.24.2 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.100 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_mbuf.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_net.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_meter.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_ethdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_pci.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_cmdline.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_metrics.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_hash.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_timer.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_acl.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_bbdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_bitratestats.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_bpf.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_cfgfile.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_compressdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_cryptodev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_distributor.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_dmadev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_efd.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing 
lib/librte_eventdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_dispatcher.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_gpudev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_gro.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_gso.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_ip_frag.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_jobstats.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_latencystats.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_lpm.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_member.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_pcapng.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_power.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_rawdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_regexdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_mldev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_rib.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_reorder.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_sched.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_security.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_stack.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 
Installing lib/librte_vhost.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_ipsec.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_pdcp.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_fib.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_port.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_pdump.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.101 Installing lib/librte_table.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.361 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.361 Installing lib/librte_pipeline.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.361 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.361 Installing lib/librte_graph.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.361 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.361 Installing lib/librte_node.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.361 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.361 Installing drivers/librte_bus_pci.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2 00:03:05.361 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.361 Installing drivers/librte_bus_vdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2 00:03:05.361 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.361 Installing drivers/librte_mempool_ring.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2 00:03:05.361 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.361 Installing drivers/librte_net_i40e.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2 00:03:05.361 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:05.361 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:05.361 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:05.361 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:05.361 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:05.361 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:05.361 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:05.361 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:05.361 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:05.361 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:05.361 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:05.361 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:05.361 Installing 
app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:05.361 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:05.361 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:05.361 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:05.361 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:05.361 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:05.361 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:05.361 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:05.361 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.361 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.361 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.361 Installing /home/vagrant/spdk_repo/dpdk/lib/argparse/rte_argparse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.361 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.361 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:05.361 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:05.361 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:05.361 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:05.361 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:05.361 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:05.361 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:05.361 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:05.361 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:05.361 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:05.361 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:05.361 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:05.361 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.361 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.361 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.361 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:05.361 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.361 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.361 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.361 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.361 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.361 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.361 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.361 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.361 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.361 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.361 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.361 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.361 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.361 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.361 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.361 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.361 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.361 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.361 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.361 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.361 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.361 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.361 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.361 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.361 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.361 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.361 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing 
/home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.362 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.363 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.363 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.363 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.363 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.363 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.363 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.363 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.363 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.363 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.363 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.363 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.363 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.363 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.363 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.363 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.363 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.363 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.363 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.363 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.363 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.363 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.363 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.363 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.363 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.363 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.363 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 
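(Sketch, not part of the console output above: the rte_*.h files being copied into /home/vagrant/spdk_repo/dpdk/build/include are DPDK's public API headers. A minimal, hypothetical consumer of that installed tree might look like the following C program; the file name and build invocation are illustrative assumptions, not taken from this log.)

#include <stdio.h>
#include <rte_eal.h>      /* installed above from lib/eal/include */
#include <rte_version.h>  /* installed above from lib/eal/include */

int main(int argc, char **argv)
{
        /* Initialize the DPDK Environment Abstraction Layer (EAL). */
        int ret = rte_eal_init(argc, argv);
        if (ret < 0) {
                fprintf(stderr, "rte_eal_init() failed\n");
                return 1;
        }
        /* Print the runtime version string exported by rte_version.h. */
        printf("DPDK version: %s\n", rte_version());
        rte_eal_cleanup();
        return 0;
}

(Such a program would typically be compiled as, e.g., cc app.c $(pkg-config --cflags --libs libdpdk) after pointing PKG_CONFIG_PATH at the pkgconfig directory installed later in this section; that invocation is a hypothetical usage note, not a command run by this job.)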
00:03:05.363 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.363 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.363 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.363 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.363 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.363 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.363 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.363 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.363 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.363 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.363 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.363 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.363 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.363 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.363 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.363 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.363 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.363 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.363 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.363 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.363 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.363 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.363 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.363 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.363 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.363 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.363 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.363 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.363 Installing 
/home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.623 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.623 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 
Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing 
/home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.624 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.625 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.625 Installing 
/home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.625 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.625 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.625 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.625 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.625 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.625 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.625 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.625 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.625 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.625 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:05.625 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:05.625 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:05.625 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:05.625 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:05.625 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:05.625 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.625 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:05.625 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:05.625 Installing symlink pointing to librte_log.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:03:05.625 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:03:05.625 Installing symlink pointing to librte_kvargs.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:03:05.625 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:05.625 Installing symlink pointing to librte_argparse.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_argparse.so.24 00:03:05.625 Installing symlink pointing to librte_argparse.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_argparse.so 00:03:05.625 Installing symlink pointing to librte_telemetry.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:03:05.625 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:05.625 Installing symlink pointing to librte_eal.so.24.2 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:03:05.625 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:05.625 Installing symlink pointing to librte_ring.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:03:05.625 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:05.625 Installing symlink pointing to librte_rcu.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:03:05.625 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:05.625 Installing symlink pointing to librte_mempool.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:03:05.625 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:05.625 Installing symlink pointing to librte_mbuf.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:03:05.625 Installing symlink pointing to librte_mbuf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:05.625 Installing symlink pointing to librte_net.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:03:05.625 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:05.625 Installing symlink pointing to librte_meter.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:03:05.625 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:05.625 Installing symlink pointing to librte_ethdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:03:05.625 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:05.625 Installing symlink pointing to librte_pci.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:03:05.625 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:05.625 Installing symlink pointing to librte_cmdline.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:03:05.625 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:05.625 Installing symlink pointing to librte_metrics.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:03:05.625 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:05.625 Installing symlink pointing to librte_hash.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:03:05.625 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:05.625 Installing symlink pointing to librte_timer.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:03:05.625 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:05.625 Installing symlink pointing to librte_acl.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:03:05.625 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:05.625 Installing symlink pointing to librte_bbdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:03:05.625 Installing symlink pointing to 
librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:05.625 Installing symlink pointing to librte_bitratestats.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:03:05.625 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:05.625 Installing symlink pointing to librte_bpf.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:03:05.625 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:05.625 Installing symlink pointing to librte_cfgfile.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:03:05.625 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:05.625 Installing symlink pointing to librte_compressdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:03:05.625 Installing symlink pointing to librte_compressdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:05.625 Installing symlink pointing to librte_cryptodev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:03:05.625 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:05.625 Installing symlink pointing to librte_distributor.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:03:05.625 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:05.625 Installing symlink pointing to librte_dmadev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:03:05.625 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:05.625 Installing symlink pointing to librte_efd.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:03:05.625 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:05.625 Installing symlink pointing to librte_eventdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:03:05.625 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:05.625 Installing symlink pointing to librte_dispatcher.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:03:05.625 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:03:05.625 Installing symlink pointing to librte_gpudev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:03:05.625 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:05.625 Installing symlink pointing to librte_gro.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:03:05.625 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:05.625 Installing symlink pointing to librte_gso.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:03:05.625 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:05.625 Installing symlink pointing to librte_ip_frag.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:03:05.625 Installing 
symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:05.625 Installing symlink pointing to librte_jobstats.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:03:05.625 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:05.625 Installing symlink pointing to librte_latencystats.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:03:05.625 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:05.625 Installing symlink pointing to librte_lpm.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:03:05.625 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:05.625 Installing symlink pointing to librte_member.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:03:05.625 Installing symlink pointing to librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:05.625 Installing symlink pointing to librte_pcapng.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:03:05.625 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:05.625 Installing symlink pointing to librte_power.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:03:05.625 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:05.625 Installing symlink pointing to librte_rawdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:03:05.625 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:05.625 Installing symlink pointing to librte_regexdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:03:05.625 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:05.625 Installing symlink pointing to librte_mldev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:03:05.625 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:03:05.625 Installing symlink pointing to librte_rib.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:03:05.625 './librte_bus_pci.so' -> 'dpdk/pmds-24.2/librte_bus_pci.so' 00:03:05.625 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.2/librte_bus_pci.so.24' 00:03:05.626 './librte_bus_pci.so.24.2' -> 'dpdk/pmds-24.2/librte_bus_pci.so.24.2' 00:03:05.626 './librte_bus_vdev.so' -> 'dpdk/pmds-24.2/librte_bus_vdev.so' 00:03:05.626 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.2/librte_bus_vdev.so.24' 00:03:05.626 './librte_bus_vdev.so.24.2' -> 'dpdk/pmds-24.2/librte_bus_vdev.so.24.2' 00:03:05.626 './librte_mempool_ring.so' -> 'dpdk/pmds-24.2/librte_mempool_ring.so' 00:03:05.626 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.2/librte_mempool_ring.so.24' 00:03:05.626 './librte_mempool_ring.so.24.2' -> 'dpdk/pmds-24.2/librte_mempool_ring.so.24.2' 00:03:05.626 './librte_net_i40e.so' -> 'dpdk/pmds-24.2/librte_net_i40e.so' 00:03:05.626 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.2/librte_net_i40e.so.24' 00:03:05.626 './librte_net_i40e.so.24.2' -> 'dpdk/pmds-24.2/librte_net_i40e.so.24.2' 00:03:05.626 Installing symlink pointing to librte_rib.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:05.626 Installing symlink pointing to librte_reorder.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:03:05.626 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:05.626 Installing symlink pointing to librte_sched.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:03:05.626 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:05.626 Installing symlink pointing to librte_security.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:03:05.626 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:05.626 Installing symlink pointing to librte_stack.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:03:05.626 Installing symlink pointing to librte_stack.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:05.626 Installing symlink pointing to librte_vhost.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:03:05.626 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:05.626 Installing symlink pointing to librte_ipsec.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:03:05.626 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:05.626 Installing symlink pointing to librte_pdcp.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:03:05.626 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:03:05.626 Installing symlink pointing to librte_fib.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:03:05.626 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:05.626 Installing symlink pointing to librte_port.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:03:05.626 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:05.626 Installing symlink pointing to librte_pdump.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:03:05.626 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:05.626 Installing symlink pointing to librte_table.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:03:05.626 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:05.626 Installing symlink pointing to librte_pipeline.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:03:05.626 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:05.626 Installing symlink pointing to librte_graph.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:03:05.626 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:05.626 Installing symlink pointing to librte_node.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:03:05.626 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:05.626 Installing symlink 
pointing to librte_bus_pci.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so.24 00:03:05.626 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so 00:03:05.626 Installing symlink pointing to librte_bus_vdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so.24 00:03:05.626 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so 00:03:05.626 Installing symlink pointing to librte_mempool_ring.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so.24 00:03:05.626 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so 00:03:05.626 Installing symlink pointing to librte_net_i40e.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so.24 00:03:05.626 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so 00:03:05.626 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.2' 00:03:05.626 02:49:51 build_native_dpdk -- common/autobuild_common.sh@189 -- $ uname -s 00:03:05.626 02:49:51 build_native_dpdk -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:05.626 02:49:51 build_native_dpdk -- common/autobuild_common.sh@200 -- $ cat 00:03:05.626 02:49:51 build_native_dpdk -- common/autobuild_common.sh@205 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:05.626 00:03:05.626 real 1m3.259s 00:03:05.626 user 7m53.092s 00:03:05.626 sys 1m6.401s 00:03:05.626 ************************************ 00:03:05.626 END TEST build_native_dpdk 00:03:05.626 ************************************ 00:03:05.626 02:49:51 build_native_dpdk -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:03:05.626 02:49:51 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:05.626 02:49:51 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:05.626 02:49:51 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:05.626 02:49:51 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:05.626 02:49:51 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:05.626 02:49:51 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:05.626 02:49:51 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:05.626 02:49:51 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:05.626 02:49:51 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme --with-shared 00:03:05.885 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:05.885 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.885 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:05.885 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:06.143 Using 'verbs' RDMA provider 00:03:19.279 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:34.159 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:34.159 Creating mk/config.mk...done. 00:03:34.159 Creating mk/cc.flags.mk...done. 
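The block of "Installing symlink pointing to ..." entries above is the standard shared-library versioning layout DPDK's install step produces: each library is a real file named librte_<lib>.so.24.2, with a librte_<lib>.so.24 soname link and a bare librte_<lib>.so development link pointing at it, and symlink-drivers-solibs.sh repeats the same pattern for the PMDs under dpdk/pmds-24.2. The SPDK configure run above then locates this private DPDK build through the libdpdk.pc installed into build/lib/pkgconfig. A minimal sketch of the equivalent manual steps, assuming the same /home/vagrant/spdk_repo/dpdk/build prefix used in this log (the pkg-config output in the comments is illustrative, not copied from this run):

  # soname link (runtime) and bare .so link (link time) for one library,
  # mirroring the symlink install entries above
  cd /home/vagrant/spdk_repo/dpdk/build/lib
  ln -sf librte_eal.so.24.2 librte_eal.so.24
  ln -sf librte_eal.so.24 librte_eal.so

  # a consumer such as the SPDK configure above resolves this build via pkg-config
  export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig:$PKG_CONFIG_PATH
  pkg-config --cflags libdpdk    # roughly: -I/home/vagrant/spdk_repo/dpdk/build/include
  pkg-config --libs libdpdk      # roughly: -L/home/vagrant/spdk_repo/dpdk/build/lib -lrte_eal ...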
00:03:34.159 Type 'make' to build. 00:03:34.159 02:50:18 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:03:34.159 02:50:18 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:03:34.159 02:50:18 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:03:34.160 02:50:18 -- common/autotest_common.sh@10 -- $ set +x 00:03:34.160 ************************************ 00:03:34.160 START TEST make 00:03:34.160 ************************************ 00:03:34.160 02:50:18 make -- common/autotest_common.sh@1121 -- $ make -j10 00:03:34.160 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:03:34.160 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:03:34.160 meson setup builddir \ 00:03:34.160 -Dwith-libaio=enabled \ 00:03:34.160 -Dwith-liburing=enabled \ 00:03:34.160 -Dwith-libvfn=disabled \ 00:03:34.160 -Dwith-spdk=false && \ 00:03:34.160 meson compile -C builddir && \ 00:03:34.160 cd -) 00:03:34.160 make[1]: Nothing to be done for 'all'. 00:03:35.533 The Meson build system 00:03:35.533 Version: 1.3.1 00:03:35.533 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:03:35.533 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:35.533 Build type: native build 00:03:35.533 Project name: xnvme 00:03:35.533 Project version: 0.7.3 00:03:35.533 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:35.533 C linker for the host machine: gcc ld.bfd 2.39-16 00:03:35.533 Host machine cpu family: x86_64 00:03:35.533 Host machine cpu: x86_64 00:03:35.533 Message: host_machine.system: linux 00:03:35.533 Compiler for C supports arguments -Wno-missing-braces: YES 00:03:35.533 Compiler for C supports arguments -Wno-cast-function-type: YES 00:03:35.533 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:35.533 Run-time dependency threads found: YES 00:03:35.533 Has header "setupapi.h" : NO 00:03:35.533 Has header "linux/blkzoned.h" : YES 00:03:35.533 Has header "linux/blkzoned.h" : YES (cached) 00:03:35.533 Has header "libaio.h" : YES 00:03:35.533 Library aio found: YES 00:03:35.533 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:35.533 Run-time dependency liburing found: YES 2.2 00:03:35.533 Dependency libvfn skipped: feature with-libvfn disabled 00:03:35.533 Run-time dependency appleframeworks found: NO (tried framework) 00:03:35.533 Run-time dependency appleframeworks found: NO (tried framework) 00:03:35.533 Configuring xnvme_config.h using configuration 00:03:35.533 Configuring xnvme.spec using configuration 00:03:35.533 Run-time dependency bash-completion found: YES 2.11 00:03:35.533 Message: Bash-completions: /usr/share/bash-completion/completions 00:03:35.533 Program cp found: YES (/usr/bin/cp) 00:03:35.533 Has header "winsock2.h" : NO 00:03:35.533 Has header "dbghelp.h" : NO 00:03:35.533 Library rpcrt4 found: NO 00:03:35.533 Library rt found: YES 00:03:35.533 Checking for function "clock_gettime" with dependency -lrt: YES 00:03:35.533 Found CMake: /usr/bin/cmake (3.27.7) 00:03:35.533 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:03:35.533 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:03:35.533 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:03:35.533 Build targets in project: 32 00:03:35.533 00:03:35.533 xnvme 0.7.3 00:03:35.533 00:03:35.533 User defined options 00:03:35.533 with-libaio : enabled 00:03:35.533 with-liburing: enabled 00:03:35.533 with-libvfn : disabled 00:03:35.533 with-spdk : false 00:03:35.533 00:03:35.533 
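For reference, the meson invocation printed at the top of this test is all the wrapper does to configure the bundled xnvme: the libaio and io_uring backends are enabled (both were detected above), libvfn is left out, and xnvme's own SPDK backend is turned off since SPDK itself is being built around it. A minimal sketch of running the same configuration by hand, using only the options already shown in this log:

  cd /home/vagrant/spdk_repo/spdk/xnvme
  # backends: libaio and liburing on, libvfn and the in-tree SPDK backend off,
  # matching the "User defined options" summary above
  meson setup builddir -Dwith-libaio=enabled -Dwith-liburing=enabled \
      -Dwith-libvfn=disabled -Dwith-spdk=false
  meson compile -C builddir

The ninja run that follows builds the 32 targets listed in the summary; the resulting lib/libxnvme.a and lib/libxnvme.so are what the bdev/xnvme module compiled later in this log builds against.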
Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:35.791 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:03:35.791 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:03:35.791 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:03:35.791 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:03:36.049 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:03:36.049 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:03:36.049 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:03:36.049 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:03:36.049 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:03:36.049 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:03:36.049 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:03:36.049 [11/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:03:36.049 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:03:36.049 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:03:36.049 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:03:36.049 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:03:36.049 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:03:36.049 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:03:36.049 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:03:36.049 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:03:36.049 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:03:36.049 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:03:36.307 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:03:36.307 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:03:36.307 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:03:36.307 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:03:36.307 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:03:36.307 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:03:36.307 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:03:36.307 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:03:36.307 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:03:36.307 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:03:36.307 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:03:36.307 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:03:36.307 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:03:36.307 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:03:36.307 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:03:36.307 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:03:36.307 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:03:36.307 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:03:36.307 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:03:36.307 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:03:36.307 
[42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:03:36.307 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:03:36.307 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:03:36.307 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:03:36.307 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:03:36.307 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:03:36.307 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:03:36.307 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:03:36.307 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:03:36.307 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:03:36.307 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:03:36.307 [53/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:03:36.565 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:03:36.565 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:03:36.565 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:03:36.565 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:03:36.565 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:03:36.565 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:03:36.565 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:03:36.565 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:03:36.565 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:03:36.565 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:03:36.565 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:03:36.565 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:03:36.565 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:03:36.565 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:03:36.565 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:03:36.823 [69/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:03:36.823 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:03:36.823 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:03:36.823 [72/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:03:36.823 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:03:36.823 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:03:36.823 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:03:36.823 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:03:36.823 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:03:36.823 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:03:36.823 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:03:36.823 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:03:36.823 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:03:36.823 [82/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:03:36.823 [83/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:03:36.823 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:03:36.823 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:03:37.081 [86/203] Compiling C object 
lib/libxnvme.a.p/xnvme_be_macos.c.o 00:03:37.081 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:03:37.081 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:03:37.081 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:03:37.081 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:03:37.081 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:03:37.081 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:03:37.081 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:03:37.081 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:03:37.081 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:03:37.081 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:03:37.081 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:03:37.081 [98/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:03:37.081 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:03:37.081 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:03:37.081 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:03:37.081 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:03:37.081 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:03:37.081 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:03:37.081 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:03:37.081 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:03:37.081 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:03:37.081 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:03:37.339 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:03:37.339 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:03:37.339 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:03:37.339 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:03:37.339 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:03:37.339 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:03:37.339 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:03:37.339 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:03:37.339 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:03:37.339 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:03:37.339 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:03:37.339 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:03:37.339 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:03:37.339 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:03:37.339 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:03:37.339 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:03:37.339 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:03:37.339 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:03:37.339 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:03:37.339 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:03:37.339 [129/203] Compiling C object 
lib/libxnvme.a.p/xnvme_req.c.o 00:03:37.339 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:03:37.339 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:03:37.597 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:03:37.597 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:03:37.597 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:03:37.597 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:03:37.597 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:03:37.597 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:03:37.597 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:03:37.597 [139/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:03:37.597 [140/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 00:03:37.597 [141/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:03:37.597 [142/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:03:37.597 [143/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:03:37.597 [144/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:03:37.597 [145/203] Linking target lib/libxnvme.so 00:03:37.854 [146/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:03:37.854 [147/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:03:37.854 [148/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:03:37.854 [149/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:03:37.854 [150/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:03:37.854 [151/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:03:37.854 [152/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:03:37.854 [153/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:03:37.854 [154/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:03:37.854 [155/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:03:37.854 [156/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:03:37.854 [157/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:03:37.854 [158/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:03:37.854 [159/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:03:37.854 [160/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:03:38.112 [161/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:03:38.112 [162/203] Compiling C object tools/xdd.p/xdd.c.o 00:03:38.112 [163/203] Compiling C object tools/kvs.p/kvs.c.o 00:03:38.112 [164/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:03:38.112 [165/203] Compiling C object tools/zoned.p/zoned.c.o 00:03:38.112 [166/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:03:38.112 [167/203] Compiling C object tools/lblk.p/lblk.c.o 00:03:38.112 [168/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:03:38.112 [169/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:03:38.112 [170/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:03:38.369 [171/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:03:38.369 [172/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:03:38.369 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:03:38.369 [174/203] Linking static target lib/libxnvme.a 00:03:38.369 [175/203] Linking target 
tests/xnvme_tests_znd_state 00:03:38.369 [176/203] Linking target tests/xnvme_tests_lblk 00:03:38.369 [177/203] Linking target tests/xnvme_tests_async_intf 00:03:38.369 [178/203] Linking target tests/xnvme_tests_buf 00:03:38.369 [179/203] Linking target tests/xnvme_tests_cli 00:03:38.628 [180/203] Linking target tests/xnvme_tests_enum 00:03:38.628 [181/203] Linking target tests/xnvme_tests_scc 00:03:38.628 [182/203] Linking target tests/xnvme_tests_xnvme_cli 00:03:38.628 [183/203] Linking target tests/xnvme_tests_znd_append 00:03:38.628 [184/203] Linking target tests/xnvme_tests_znd_explicit_open 00:03:38.628 [185/203] Linking target tests/xnvme_tests_xnvme_file 00:03:38.628 [186/203] Linking target tests/xnvme_tests_ioworker 00:03:38.628 [187/203] Linking target tests/xnvme_tests_kvs 00:03:38.628 [188/203] Linking target tools/xnvme 00:03:38.628 [189/203] Linking target tests/xnvme_tests_map 00:03:38.628 [190/203] Linking target tools/xdd 00:03:38.628 [191/203] Linking target tests/xnvme_tests_znd_zrwa 00:03:38.628 [192/203] Linking target tools/lblk 00:03:38.628 [193/203] Linking target tools/xnvme_file 00:03:38.628 [194/203] Linking target tools/kvs 00:03:38.628 [195/203] Linking target examples/xnvme_io_async 00:03:38.628 [196/203] Linking target examples/xnvme_hello 00:03:38.628 [197/203] Linking target examples/xnvme_dev 00:03:38.628 [198/203] Linking target tools/zoned 00:03:38.628 [199/203] Linking target examples/xnvme_enum 00:03:38.628 [200/203] Linking target examples/xnvme_single_sync 00:03:38.628 [201/203] Linking target examples/xnvme_single_async 00:03:38.628 [202/203] Linking target examples/zoned_io_async 00:03:38.628 [203/203] Linking target examples/zoned_io_sync 00:03:38.628 INFO: autodetecting backend as ninja 00:03:38.628 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:38.628 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:04:00.589 CC lib/ut/ut.o 00:04:00.589 CC lib/log/log.o 00:04:00.589 CC lib/log/log_flags.o 00:04:00.589 CC lib/log/log_deprecated.o 00:04:00.589 CC lib/ut_mock/mock.o 00:04:00.589 LIB libspdk_ut_mock.a 00:04:00.589 LIB libspdk_ut.a 00:04:00.589 SO libspdk_ut.so.2.0 00:04:00.589 SO libspdk_ut_mock.so.6.0 00:04:00.589 LIB libspdk_log.a 00:04:00.589 SO libspdk_log.so.7.0 00:04:00.589 SYMLINK libspdk_ut.so 00:04:00.589 SYMLINK libspdk_ut_mock.so 00:04:00.589 SYMLINK libspdk_log.so 00:04:00.589 CC lib/ioat/ioat.o 00:04:00.589 CXX lib/trace_parser/trace.o 00:04:00.589 CC lib/util/base64.o 00:04:00.589 CC lib/util/bit_array.o 00:04:00.589 CC lib/util/crc16.o 00:04:00.589 CC lib/util/cpuset.o 00:04:00.589 CC lib/util/crc32.o 00:04:00.589 CC lib/util/crc32c.o 00:04:00.589 CC lib/dma/dma.o 00:04:00.589 CC lib/vfio_user/host/vfio_user_pci.o 00:04:00.589 CC lib/util/crc32_ieee.o 00:04:00.589 CC lib/util/crc64.o 00:04:00.589 CC lib/util/dif.o 00:04:00.589 CC lib/util/fd.o 00:04:00.589 CC lib/vfio_user/host/vfio_user.o 00:04:00.589 CC lib/util/file.o 00:04:00.589 LIB libspdk_dma.a 00:04:00.589 SO libspdk_dma.so.4.0 00:04:00.589 CC lib/util/hexlify.o 00:04:00.589 CC lib/util/iov.o 00:04:00.589 SYMLINK libspdk_dma.so 00:04:00.589 CC lib/util/math.o 00:04:00.589 CC lib/util/pipe.o 00:04:00.589 LIB libspdk_ioat.a 00:04:00.589 CC lib/util/strerror_tls.o 00:04:00.589 SO libspdk_ioat.so.7.0 00:04:00.589 LIB libspdk_vfio_user.a 00:04:00.589 SYMLINK libspdk_ioat.so 00:04:00.589 CC lib/util/string.o 00:04:00.589 CC lib/util/uuid.o 00:04:00.589 CC lib/util/fd_group.o 00:04:00.589 SO libspdk_vfio_user.so.5.0 
00:04:00.589 CC lib/util/xor.o 00:04:00.589 CC lib/util/zipf.o 00:04:00.589 SYMLINK libspdk_vfio_user.so 00:04:00.589 LIB libspdk_util.a 00:04:00.589 SO libspdk_util.so.9.0 00:04:00.589 LIB libspdk_trace_parser.a 00:04:00.589 SO libspdk_trace_parser.so.5.0 00:04:00.589 SYMLINK libspdk_util.so 00:04:00.589 SYMLINK libspdk_trace_parser.so 00:04:00.589 CC lib/idxd/idxd.o 00:04:00.589 CC lib/rdma/common.o 00:04:00.589 CC lib/idxd/idxd_user.o 00:04:00.589 CC lib/rdma/rdma_verbs.o 00:04:00.589 CC lib/vmd/vmd.o 00:04:00.589 CC lib/vmd/led.o 00:04:00.589 CC lib/conf/conf.o 00:04:00.589 CC lib/json/json_parse.o 00:04:00.589 CC lib/json/json_util.o 00:04:00.589 CC lib/env_dpdk/env.o 00:04:00.589 CC lib/env_dpdk/memory.o 00:04:00.589 CC lib/env_dpdk/pci.o 00:04:00.589 LIB libspdk_conf.a 00:04:00.589 CC lib/json/json_write.o 00:04:00.589 CC lib/env_dpdk/init.o 00:04:00.589 CC lib/env_dpdk/threads.o 00:04:00.589 SO libspdk_conf.so.6.0 00:04:00.589 LIB libspdk_rdma.a 00:04:00.589 SO libspdk_rdma.so.6.0 00:04:00.589 SYMLINK libspdk_conf.so 00:04:00.589 CC lib/env_dpdk/pci_ioat.o 00:04:00.589 SYMLINK libspdk_rdma.so 00:04:00.589 CC lib/env_dpdk/pci_virtio.o 00:04:00.589 CC lib/env_dpdk/pci_vmd.o 00:04:00.589 CC lib/env_dpdk/pci_idxd.o 00:04:00.589 CC lib/env_dpdk/pci_event.o 00:04:00.589 CC lib/env_dpdk/sigbus_handler.o 00:04:00.589 LIB libspdk_json.a 00:04:00.589 SO libspdk_json.so.6.0 00:04:00.589 CC lib/env_dpdk/pci_dpdk.o 00:04:00.589 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:00.589 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:00.589 LIB libspdk_idxd.a 00:04:00.589 SYMLINK libspdk_json.so 00:04:00.589 SO libspdk_idxd.so.12.0 00:04:00.589 LIB libspdk_vmd.a 00:04:00.589 SYMLINK libspdk_idxd.so 00:04:00.848 SO libspdk_vmd.so.6.0 00:04:00.848 CC lib/jsonrpc/jsonrpc_server.o 00:04:00.848 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:00.848 CC lib/jsonrpc/jsonrpc_client.o 00:04:00.848 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:00.848 SYMLINK libspdk_vmd.so 00:04:01.107 LIB libspdk_jsonrpc.a 00:04:01.107 SO libspdk_jsonrpc.so.6.0 00:04:01.366 SYMLINK libspdk_jsonrpc.so 00:04:01.626 CC lib/rpc/rpc.o 00:04:01.626 LIB libspdk_env_dpdk.a 00:04:01.885 LIB libspdk_rpc.a 00:04:01.885 SO libspdk_rpc.so.6.0 00:04:01.885 SO libspdk_env_dpdk.so.14.0 00:04:01.885 SYMLINK libspdk_rpc.so 00:04:02.144 SYMLINK libspdk_env_dpdk.so 00:04:02.144 CC lib/notify/notify.o 00:04:02.144 CC lib/notify/notify_rpc.o 00:04:02.144 CC lib/keyring/keyring.o 00:04:02.144 CC lib/keyring/keyring_rpc.o 00:04:02.144 CC lib/trace/trace.o 00:04:02.144 CC lib/trace/trace_flags.o 00:04:02.144 CC lib/trace/trace_rpc.o 00:04:02.404 LIB libspdk_notify.a 00:04:02.404 SO libspdk_notify.so.6.0 00:04:02.404 LIB libspdk_keyring.a 00:04:02.404 SYMLINK libspdk_notify.so 00:04:02.404 SO libspdk_keyring.so.1.0 00:04:02.404 LIB libspdk_trace.a 00:04:02.404 SO libspdk_trace.so.10.0 00:04:02.663 SYMLINK libspdk_keyring.so 00:04:02.663 SYMLINK libspdk_trace.so 00:04:02.922 CC lib/thread/thread.o 00:04:02.922 CC lib/sock/sock.o 00:04:02.922 CC lib/thread/iobuf.o 00:04:02.922 CC lib/sock/sock_rpc.o 00:04:03.491 LIB libspdk_sock.a 00:04:03.491 SO libspdk_sock.so.9.0 00:04:03.491 SYMLINK libspdk_sock.so 00:04:03.751 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:03.751 CC lib/nvme/nvme_ctrlr.o 00:04:03.751 CC lib/nvme/nvme_fabric.o 00:04:03.751 CC lib/nvme/nvme_ns_cmd.o 00:04:03.751 CC lib/nvme/nvme_pcie_common.o 00:04:03.751 CC lib/nvme/nvme_ns.o 00:04:03.751 CC lib/nvme/nvme_pcie.o 00:04:03.751 CC lib/nvme/nvme_qpair.o 00:04:03.751 CC lib/nvme/nvme.o 00:04:04.689 CC lib/nvme/nvme_quirks.o 
00:04:04.689 CC lib/nvme/nvme_transport.o 00:04:04.689 CC lib/nvme/nvme_discovery.o 00:04:04.948 LIB libspdk_thread.a 00:04:04.948 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:04.948 SO libspdk_thread.so.10.0 00:04:04.948 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:04.948 CC lib/nvme/nvme_tcp.o 00:04:04.948 SYMLINK libspdk_thread.so 00:04:04.948 CC lib/nvme/nvme_opal.o 00:04:04.948 CC lib/nvme/nvme_io_msg.o 00:04:05.207 CC lib/nvme/nvme_poll_group.o 00:04:05.207 CC lib/nvme/nvme_zns.o 00:04:05.465 CC lib/nvme/nvme_stubs.o 00:04:05.465 CC lib/nvme/nvme_auth.o 00:04:05.465 CC lib/nvme/nvme_cuse.o 00:04:05.465 CC lib/nvme/nvme_rdma.o 00:04:05.725 CC lib/accel/accel.o 00:04:05.725 CC lib/blob/blobstore.o 00:04:05.725 CC lib/accel/accel_rpc.o 00:04:05.984 CC lib/accel/accel_sw.o 00:04:06.243 CC lib/init/json_config.o 00:04:06.243 CC lib/virtio/virtio.o 00:04:06.243 CC lib/virtio/virtio_vhost_user.o 00:04:06.502 CC lib/init/subsystem.o 00:04:06.502 CC lib/blob/request.o 00:04:06.502 CC lib/init/subsystem_rpc.o 00:04:06.502 CC lib/virtio/virtio_vfio_user.o 00:04:06.502 CC lib/init/rpc.o 00:04:06.761 CC lib/blob/zeroes.o 00:04:06.761 CC lib/virtio/virtio_pci.o 00:04:06.761 LIB libspdk_init.a 00:04:06.761 CC lib/blob/blob_bs_dev.o 00:04:06.761 SO libspdk_init.so.5.0 00:04:07.020 SYMLINK libspdk_init.so 00:04:07.020 LIB libspdk_accel.a 00:04:07.020 LIB libspdk_virtio.a 00:04:07.020 SO libspdk_accel.so.15.0 00:04:07.280 CC lib/event/app.o 00:04:07.280 CC lib/event/log_rpc.o 00:04:07.280 CC lib/event/reactor.o 00:04:07.280 CC lib/event/scheduler_static.o 00:04:07.280 CC lib/event/app_rpc.o 00:04:07.280 SO libspdk_virtio.so.7.0 00:04:07.280 SYMLINK libspdk_accel.so 00:04:07.280 SYMLINK libspdk_virtio.so 00:04:07.280 LIB libspdk_nvme.a 00:04:07.280 CC lib/bdev/bdev.o 00:04:07.280 CC lib/bdev/bdev_rpc.o 00:04:07.538 CC lib/bdev/bdev_zone.o 00:04:07.538 CC lib/bdev/scsi_nvme.o 00:04:07.538 CC lib/bdev/part.o 00:04:07.538 SO libspdk_nvme.so.13.0 00:04:07.795 LIB libspdk_event.a 00:04:07.795 SO libspdk_event.so.13.0 00:04:07.795 SYMLINK libspdk_nvme.so 00:04:08.054 SYMLINK libspdk_event.so 00:04:09.958 LIB libspdk_blob.a 00:04:09.958 SO libspdk_blob.so.11.0 00:04:10.218 SYMLINK libspdk_blob.so 00:04:10.477 CC lib/lvol/lvol.o 00:04:10.477 CC lib/blobfs/tree.o 00:04:10.477 CC lib/blobfs/blobfs.o 00:04:11.045 LIB libspdk_bdev.a 00:04:11.045 SO libspdk_bdev.so.15.0 00:04:11.045 SYMLINK libspdk_bdev.so 00:04:11.303 CC lib/nbd/nbd.o 00:04:11.303 CC lib/nbd/nbd_rpc.o 00:04:11.303 CC lib/ublk/ublk.o 00:04:11.303 CC lib/ublk/ublk_rpc.o 00:04:11.303 CC lib/ftl/ftl_core.o 00:04:11.303 CC lib/ftl/ftl_init.o 00:04:11.303 CC lib/scsi/dev.o 00:04:11.303 CC lib/nvmf/ctrlr.o 00:04:11.561 LIB libspdk_blobfs.a 00:04:11.561 CC lib/scsi/lun.o 00:04:11.561 SO libspdk_blobfs.so.10.0 00:04:11.561 CC lib/scsi/port.o 00:04:11.561 CC lib/ftl/ftl_layout.o 00:04:11.561 CC lib/scsi/scsi.o 00:04:11.561 LIB libspdk_lvol.a 00:04:11.561 SYMLINK libspdk_blobfs.so 00:04:11.561 CC lib/scsi/scsi_bdev.o 00:04:11.561 SO libspdk_lvol.so.10.0 00:04:11.843 SYMLINK libspdk_lvol.so 00:04:11.843 CC lib/scsi/scsi_pr.o 00:04:11.843 CC lib/nvmf/ctrlr_discovery.o 00:04:11.843 CC lib/nvmf/ctrlr_bdev.o 00:04:11.843 CC lib/ftl/ftl_debug.o 00:04:11.843 LIB libspdk_nbd.a 00:04:11.843 SO libspdk_nbd.so.7.0 00:04:11.843 CC lib/scsi/scsi_rpc.o 00:04:11.843 SYMLINK libspdk_nbd.so 00:04:11.843 CC lib/nvmf/subsystem.o 00:04:12.180 CC lib/ftl/ftl_io.o 00:04:12.180 CC lib/ftl/ftl_sb.o 00:04:12.180 CC lib/ftl/ftl_l2p.o 00:04:12.180 LIB libspdk_ublk.a 00:04:12.180 CC 
lib/scsi/task.o 00:04:12.180 SO libspdk_ublk.so.3.0 00:04:12.180 SYMLINK libspdk_ublk.so 00:04:12.180 CC lib/ftl/ftl_l2p_flat.o 00:04:12.180 CC lib/nvmf/nvmf.o 00:04:12.180 CC lib/ftl/ftl_nv_cache.o 00:04:12.438 CC lib/ftl/ftl_band.o 00:04:12.438 CC lib/ftl/ftl_band_ops.o 00:04:12.438 CC lib/nvmf/nvmf_rpc.o 00:04:12.438 LIB libspdk_scsi.a 00:04:12.438 CC lib/ftl/ftl_writer.o 00:04:12.438 SO libspdk_scsi.so.9.0 00:04:12.696 SYMLINK libspdk_scsi.so 00:04:12.696 CC lib/nvmf/transport.o 00:04:12.696 CC lib/nvmf/tcp.o 00:04:12.955 CC lib/ftl/ftl_rq.o 00:04:12.955 CC lib/ftl/ftl_reloc.o 00:04:12.955 CC lib/ftl/ftl_l2p_cache.o 00:04:12.955 CC lib/ftl/ftl_p2l.o 00:04:13.212 CC lib/ftl/mngt/ftl_mngt.o 00:04:13.470 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:13.470 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:13.470 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:13.470 CC lib/nvmf/stubs.o 00:04:13.470 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:13.470 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:13.470 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:13.738 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:13.738 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:13.738 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:13.738 CC lib/nvmf/rdma.o 00:04:13.738 CC lib/nvmf/auth.o 00:04:13.738 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:13.738 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:13.738 CC lib/iscsi/conn.o 00:04:13.738 CC lib/iscsi/init_grp.o 00:04:13.738 CC lib/iscsi/iscsi.o 00:04:14.001 CC lib/iscsi/md5.o 00:04:14.001 CC lib/iscsi/param.o 00:04:14.001 CC lib/iscsi/portal_grp.o 00:04:14.259 CC lib/iscsi/tgt_node.o 00:04:14.259 CC lib/iscsi/iscsi_subsystem.o 00:04:14.259 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:14.259 CC lib/iscsi/iscsi_rpc.o 00:04:14.517 CC lib/ftl/utils/ftl_conf.o 00:04:14.517 CC lib/iscsi/task.o 00:04:14.517 CC lib/ftl/utils/ftl_md.o 00:04:14.517 CC lib/ftl/utils/ftl_mempool.o 00:04:14.517 CC lib/ftl/utils/ftl_bitmap.o 00:04:14.776 CC lib/ftl/utils/ftl_property.o 00:04:14.776 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:14.776 CC lib/vhost/vhost.o 00:04:14.776 CC lib/vhost/vhost_rpc.o 00:04:14.776 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:14.776 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:14.776 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:15.034 CC lib/vhost/vhost_scsi.o 00:04:15.034 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:15.034 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:15.034 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:15.034 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:15.034 CC lib/vhost/vhost_blk.o 00:04:15.292 CC lib/vhost/rte_vhost_user.o 00:04:15.293 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:15.293 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:15.293 CC lib/ftl/base/ftl_base_dev.o 00:04:15.552 CC lib/ftl/base/ftl_base_bdev.o 00:04:15.552 CC lib/ftl/ftl_trace.o 00:04:15.552 LIB libspdk_iscsi.a 00:04:15.810 SO libspdk_iscsi.so.8.0 00:04:15.810 LIB libspdk_ftl.a 00:04:16.068 SYMLINK libspdk_iscsi.so 00:04:16.068 SO libspdk_ftl.so.9.0 00:04:16.327 LIB libspdk_vhost.a 00:04:16.327 SYMLINK libspdk_ftl.so 00:04:16.587 LIB libspdk_nvmf.a 00:04:16.587 SO libspdk_vhost.so.8.0 00:04:16.587 SO libspdk_nvmf.so.18.0 00:04:16.587 SYMLINK libspdk_vhost.so 00:04:16.846 SYMLINK libspdk_nvmf.so 00:04:17.413 CC module/env_dpdk/env_dpdk_rpc.o 00:04:17.413 CC module/accel/iaa/accel_iaa.o 00:04:17.413 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:17.413 CC module/sock/posix/posix.o 00:04:17.413 CC module/blob/bdev/blob_bdev.o 00:04:17.413 CC module/accel/error/accel_error.o 00:04:17.413 CC module/accel/dsa/accel_dsa.o 00:04:17.413 CC module/keyring/file/keyring.o 00:04:17.413 CC 
module/accel/ioat/accel_ioat.o 00:04:17.413 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:17.413 LIB libspdk_env_dpdk_rpc.a 00:04:17.413 SO libspdk_env_dpdk_rpc.so.6.0 00:04:17.413 SYMLINK libspdk_env_dpdk_rpc.so 00:04:17.671 CC module/accel/ioat/accel_ioat_rpc.o 00:04:17.671 CC module/keyring/file/keyring_rpc.o 00:04:17.671 CC module/accel/error/accel_error_rpc.o 00:04:17.671 CC module/accel/iaa/accel_iaa_rpc.o 00:04:17.671 LIB libspdk_scheduler_dpdk_governor.a 00:04:17.671 LIB libspdk_scheduler_dynamic.a 00:04:17.671 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:17.671 SO libspdk_scheduler_dynamic.so.4.0 00:04:17.671 LIB libspdk_accel_ioat.a 00:04:17.671 LIB libspdk_blob_bdev.a 00:04:17.671 CC module/accel/dsa/accel_dsa_rpc.o 00:04:17.671 SO libspdk_accel_ioat.so.6.0 00:04:17.671 SO libspdk_blob_bdev.so.11.0 00:04:17.671 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:17.671 LIB libspdk_accel_error.a 00:04:17.671 SYMLINK libspdk_scheduler_dynamic.so 00:04:17.671 LIB libspdk_keyring_file.a 00:04:17.671 SYMLINK libspdk_blob_bdev.so 00:04:17.929 SO libspdk_accel_error.so.2.0 00:04:17.929 SYMLINK libspdk_accel_ioat.so 00:04:17.929 SO libspdk_keyring_file.so.1.0 00:04:17.929 CC module/scheduler/gscheduler/gscheduler.o 00:04:17.929 LIB libspdk_accel_iaa.a 00:04:17.929 SYMLINK libspdk_accel_error.so 00:04:17.929 SYMLINK libspdk_keyring_file.so 00:04:17.929 LIB libspdk_accel_dsa.a 00:04:17.929 SO libspdk_accel_iaa.so.3.0 00:04:17.929 SO libspdk_accel_dsa.so.5.0 00:04:17.929 SYMLINK libspdk_accel_iaa.so 00:04:17.929 SYMLINK libspdk_accel_dsa.so 00:04:18.187 LIB libspdk_scheduler_gscheduler.a 00:04:18.187 SO libspdk_scheduler_gscheduler.so.4.0 00:04:18.187 CC module/bdev/error/vbdev_error.o 00:04:18.187 CC module/bdev/gpt/gpt.o 00:04:18.187 CC module/bdev/malloc/bdev_malloc.o 00:04:18.187 CC module/blobfs/bdev/blobfs_bdev.o 00:04:18.187 CC module/bdev/lvol/vbdev_lvol.o 00:04:18.187 CC module/bdev/delay/vbdev_delay.o 00:04:18.187 SYMLINK libspdk_scheduler_gscheduler.so 00:04:18.187 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:18.187 CC module/bdev/nvme/bdev_nvme.o 00:04:18.187 CC module/bdev/null/bdev_null.o 00:04:18.445 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:18.445 LIB libspdk_sock_posix.a 00:04:18.445 CC module/bdev/gpt/vbdev_gpt.o 00:04:18.445 CC module/bdev/null/bdev_null_rpc.o 00:04:18.445 SO libspdk_sock_posix.so.6.0 00:04:18.445 CC module/bdev/error/vbdev_error_rpc.o 00:04:18.445 SYMLINK libspdk_sock_posix.so 00:04:18.445 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:18.445 LIB libspdk_blobfs_bdev.a 00:04:18.445 CC module/bdev/nvme/nvme_rpc.o 00:04:18.703 LIB libspdk_bdev_delay.a 00:04:18.703 SO libspdk_blobfs_bdev.so.6.0 00:04:18.703 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:18.703 LIB libspdk_bdev_null.a 00:04:18.703 SO libspdk_bdev_delay.so.6.0 00:04:18.703 SO libspdk_bdev_null.so.6.0 00:04:18.703 LIB libspdk_bdev_error.a 00:04:18.703 SYMLINK libspdk_blobfs_bdev.so 00:04:18.703 SO libspdk_bdev_error.so.6.0 00:04:18.703 SYMLINK libspdk_bdev_delay.so 00:04:18.703 SYMLINK libspdk_bdev_null.so 00:04:18.703 LIB libspdk_bdev_gpt.a 00:04:18.703 CC module/bdev/nvme/bdev_mdns_client.o 00:04:18.703 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:18.703 SYMLINK libspdk_bdev_error.so 00:04:18.703 SO libspdk_bdev_gpt.so.6.0 00:04:18.703 LIB libspdk_bdev_malloc.a 00:04:18.703 SO libspdk_bdev_malloc.so.6.0 00:04:18.703 SYMLINK libspdk_bdev_gpt.so 00:04:18.962 CC module/bdev/nvme/vbdev_opal.o 00:04:18.962 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:18.962 CC 
module/bdev/passthru/vbdev_passthru.o 00:04:18.962 SYMLINK libspdk_bdev_malloc.so 00:04:18.962 CC module/bdev/raid/bdev_raid.o 00:04:18.962 CC module/bdev/split/vbdev_split.o 00:04:18.962 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:18.962 CC module/bdev/xnvme/bdev_xnvme.o 00:04:18.962 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:19.220 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:19.220 LIB libspdk_bdev_lvol.a 00:04:19.220 SO libspdk_bdev_lvol.so.6.0 00:04:19.220 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:19.220 CC module/bdev/raid/bdev_raid_rpc.o 00:04:19.220 SYMLINK libspdk_bdev_lvol.so 00:04:19.220 CC module/bdev/split/vbdev_split_rpc.o 00:04:19.478 LIB libspdk_bdev_xnvme.a 00:04:19.478 SO libspdk_bdev_xnvme.so.3.0 00:04:19.478 LIB libspdk_bdev_passthru.a 00:04:19.478 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:19.478 SO libspdk_bdev_passthru.so.6.0 00:04:19.478 CC module/bdev/aio/bdev_aio.o 00:04:19.478 LIB libspdk_bdev_split.a 00:04:19.478 CC module/bdev/ftl/bdev_ftl.o 00:04:19.478 SYMLINK libspdk_bdev_xnvme.so 00:04:19.478 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:19.478 CC module/bdev/iscsi/bdev_iscsi.o 00:04:19.478 SO libspdk_bdev_split.so.6.0 00:04:19.478 SYMLINK libspdk_bdev_passthru.so 00:04:19.478 CC module/bdev/aio/bdev_aio_rpc.o 00:04:19.735 SYMLINK libspdk_bdev_split.so 00:04:19.736 CC module/bdev/raid/bdev_raid_sb.o 00:04:19.736 LIB libspdk_bdev_zone_block.a 00:04:19.736 SO libspdk_bdev_zone_block.so.6.0 00:04:19.736 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:19.736 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:19.736 CC module/bdev/raid/raid0.o 00:04:19.736 SYMLINK libspdk_bdev_zone_block.so 00:04:19.736 CC module/bdev/raid/raid1.o 00:04:19.736 LIB libspdk_bdev_ftl.a 00:04:19.993 SO libspdk_bdev_ftl.so.6.0 00:04:19.993 LIB libspdk_bdev_aio.a 00:04:19.993 SYMLINK libspdk_bdev_ftl.so 00:04:19.993 SO libspdk_bdev_aio.so.6.0 00:04:19.993 CC module/bdev/raid/concat.o 00:04:19.993 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:19.993 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:19.993 SYMLINK libspdk_bdev_aio.so 00:04:20.257 LIB libspdk_bdev_iscsi.a 00:04:20.258 SO libspdk_bdev_iscsi.so.6.0 00:04:20.258 LIB libspdk_bdev_raid.a 00:04:20.258 SYMLINK libspdk_bdev_iscsi.so 00:04:20.258 SO libspdk_bdev_raid.so.6.0 00:04:20.521 LIB libspdk_bdev_virtio.a 00:04:20.521 SO libspdk_bdev_virtio.so.6.0 00:04:20.521 SYMLINK libspdk_bdev_raid.so 00:04:20.521 SYMLINK libspdk_bdev_virtio.so 00:04:21.085 LIB libspdk_bdev_nvme.a 00:04:21.343 SO libspdk_bdev_nvme.so.7.0 00:04:21.343 SYMLINK libspdk_bdev_nvme.so 00:04:21.908 CC module/event/subsystems/scheduler/scheduler.o 00:04:21.908 CC module/event/subsystems/vmd/vmd.o 00:04:21.908 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:21.908 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:21.908 CC module/event/subsystems/keyring/keyring.o 00:04:21.908 CC module/event/subsystems/iobuf/iobuf.o 00:04:21.908 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:21.908 CC module/event/subsystems/sock/sock.o 00:04:21.908 LIB libspdk_event_scheduler.a 00:04:21.908 LIB libspdk_event_keyring.a 00:04:22.179 LIB libspdk_event_sock.a 00:04:22.179 LIB libspdk_event_vhost_blk.a 00:04:22.179 LIB libspdk_event_vmd.a 00:04:22.179 SO libspdk_event_scheduler.so.4.0 00:04:22.179 SO libspdk_event_keyring.so.1.0 00:04:22.179 LIB libspdk_event_iobuf.a 00:04:22.179 SO libspdk_event_sock.so.5.0 00:04:22.179 SO libspdk_event_vhost_blk.so.3.0 00:04:22.179 SO libspdk_event_vmd.so.6.0 00:04:22.179 SO libspdk_event_iobuf.so.3.0 00:04:22.179 SYMLINK 
libspdk_event_keyring.so 00:04:22.179 SYMLINK libspdk_event_scheduler.so 00:04:22.179 SYMLINK libspdk_event_vhost_blk.so 00:04:22.179 SYMLINK libspdk_event_vmd.so 00:04:22.179 SYMLINK libspdk_event_sock.so 00:04:22.179 SYMLINK libspdk_event_iobuf.so 00:04:22.484 CC module/event/subsystems/accel/accel.o 00:04:22.743 LIB libspdk_event_accel.a 00:04:22.743 SO libspdk_event_accel.so.6.0 00:04:22.743 SYMLINK libspdk_event_accel.so 00:04:23.002 CC module/event/subsystems/bdev/bdev.o 00:04:23.261 LIB libspdk_event_bdev.a 00:04:23.261 SO libspdk_event_bdev.so.6.0 00:04:23.261 SYMLINK libspdk_event_bdev.so 00:04:23.518 CC module/event/subsystems/nbd/nbd.o 00:04:23.518 CC module/event/subsystems/ublk/ublk.o 00:04:23.518 CC module/event/subsystems/scsi/scsi.o 00:04:23.518 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:23.518 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:23.776 LIB libspdk_event_nbd.a 00:04:23.776 LIB libspdk_event_ublk.a 00:04:23.776 LIB libspdk_event_scsi.a 00:04:23.776 SO libspdk_event_nbd.so.6.0 00:04:23.776 SO libspdk_event_ublk.so.3.0 00:04:23.776 SO libspdk_event_scsi.so.6.0 00:04:23.776 SYMLINK libspdk_event_nbd.so 00:04:23.776 SYMLINK libspdk_event_ublk.so 00:04:23.776 SYMLINK libspdk_event_scsi.so 00:04:23.776 LIB libspdk_event_nvmf.a 00:04:24.034 SO libspdk_event_nvmf.so.6.0 00:04:24.034 SYMLINK libspdk_event_nvmf.so 00:04:24.034 CC module/event/subsystems/iscsi/iscsi.o 00:04:24.034 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:24.291 LIB libspdk_event_vhost_scsi.a 00:04:24.291 LIB libspdk_event_iscsi.a 00:04:24.291 SO libspdk_event_vhost_scsi.so.3.0 00:04:24.291 SO libspdk_event_iscsi.so.6.0 00:04:24.291 SYMLINK libspdk_event_vhost_scsi.so 00:04:24.291 SYMLINK libspdk_event_iscsi.so 00:04:24.549 SO libspdk.so.6.0 00:04:24.549 SYMLINK libspdk.so 00:04:24.808 TEST_HEADER include/spdk/accel.h 00:04:24.808 TEST_HEADER include/spdk/accel_module.h 00:04:24.808 TEST_HEADER include/spdk/assert.h 00:04:24.808 TEST_HEADER include/spdk/barrier.h 00:04:24.808 TEST_HEADER include/spdk/base64.h 00:04:24.808 TEST_HEADER include/spdk/bdev.h 00:04:24.808 CXX app/trace/trace.o 00:04:24.808 TEST_HEADER include/spdk/bdev_module.h 00:04:24.808 TEST_HEADER include/spdk/bdev_zone.h 00:04:24.808 TEST_HEADER include/spdk/bit_array.h 00:04:24.808 TEST_HEADER include/spdk/bit_pool.h 00:04:24.808 TEST_HEADER include/spdk/blob_bdev.h 00:04:24.808 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:24.808 TEST_HEADER include/spdk/blobfs.h 00:04:24.808 TEST_HEADER include/spdk/blob.h 00:04:24.808 TEST_HEADER include/spdk/conf.h 00:04:24.808 TEST_HEADER include/spdk/config.h 00:04:24.808 TEST_HEADER include/spdk/cpuset.h 00:04:24.808 TEST_HEADER include/spdk/crc16.h 00:04:24.808 TEST_HEADER include/spdk/crc32.h 00:04:24.808 TEST_HEADER include/spdk/crc64.h 00:04:24.808 TEST_HEADER include/spdk/dif.h 00:04:24.808 TEST_HEADER include/spdk/dma.h 00:04:24.808 TEST_HEADER include/spdk/endian.h 00:04:24.808 TEST_HEADER include/spdk/env_dpdk.h 00:04:24.808 TEST_HEADER include/spdk/env.h 00:04:24.808 TEST_HEADER include/spdk/event.h 00:04:24.808 TEST_HEADER include/spdk/fd_group.h 00:04:24.808 TEST_HEADER include/spdk/fd.h 00:04:24.808 TEST_HEADER include/spdk/file.h 00:04:24.808 TEST_HEADER include/spdk/ftl.h 00:04:24.808 TEST_HEADER include/spdk/gpt_spec.h 00:04:24.808 TEST_HEADER include/spdk/hexlify.h 00:04:24.808 TEST_HEADER include/spdk/histogram_data.h 00:04:24.808 TEST_HEADER include/spdk/idxd.h 00:04:24.808 TEST_HEADER include/spdk/idxd_spec.h 00:04:24.808 TEST_HEADER include/spdk/init.h 
00:04:24.808 TEST_HEADER include/spdk/ioat.h 00:04:24.808 TEST_HEADER include/spdk/ioat_spec.h 00:04:24.808 TEST_HEADER include/spdk/iscsi_spec.h 00:04:24.808 TEST_HEADER include/spdk/json.h 00:04:24.808 TEST_HEADER include/spdk/jsonrpc.h 00:04:24.808 TEST_HEADER include/spdk/keyring.h 00:04:24.808 TEST_HEADER include/spdk/keyring_module.h 00:04:24.808 CC test/event/event_perf/event_perf.o 00:04:24.808 TEST_HEADER include/spdk/likely.h 00:04:24.808 TEST_HEADER include/spdk/log.h 00:04:24.808 TEST_HEADER include/spdk/lvol.h 00:04:24.808 CC examples/accel/perf/accel_perf.o 00:04:24.808 TEST_HEADER include/spdk/memory.h 00:04:24.808 TEST_HEADER include/spdk/mmio.h 00:04:24.808 TEST_HEADER include/spdk/nbd.h 00:04:24.808 TEST_HEADER include/spdk/notify.h 00:04:24.808 TEST_HEADER include/spdk/nvme.h 00:04:24.808 TEST_HEADER include/spdk/nvme_intel.h 00:04:24.808 CC test/accel/dif/dif.o 00:04:24.808 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:24.808 CC test/blobfs/mkfs/mkfs.o 00:04:24.808 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:24.808 CC test/bdev/bdevio/bdevio.o 00:04:25.067 TEST_HEADER include/spdk/nvme_spec.h 00:04:25.067 TEST_HEADER include/spdk/nvme_zns.h 00:04:25.067 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:25.067 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:25.067 TEST_HEADER include/spdk/nvmf.h 00:04:25.067 TEST_HEADER include/spdk/nvmf_spec.h 00:04:25.067 CC test/app/bdev_svc/bdev_svc.o 00:04:25.067 CC test/dma/test_dma/test_dma.o 00:04:25.067 TEST_HEADER include/spdk/nvmf_transport.h 00:04:25.067 TEST_HEADER include/spdk/opal.h 00:04:25.067 TEST_HEADER include/spdk/opal_spec.h 00:04:25.067 TEST_HEADER include/spdk/pci_ids.h 00:04:25.067 TEST_HEADER include/spdk/pipe.h 00:04:25.067 TEST_HEADER include/spdk/queue.h 00:04:25.067 TEST_HEADER include/spdk/reduce.h 00:04:25.067 TEST_HEADER include/spdk/rpc.h 00:04:25.067 TEST_HEADER include/spdk/scheduler.h 00:04:25.067 TEST_HEADER include/spdk/scsi.h 00:04:25.067 TEST_HEADER include/spdk/scsi_spec.h 00:04:25.067 TEST_HEADER include/spdk/sock.h 00:04:25.067 TEST_HEADER include/spdk/stdinc.h 00:04:25.067 TEST_HEADER include/spdk/string.h 00:04:25.067 CC test/env/mem_callbacks/mem_callbacks.o 00:04:25.067 TEST_HEADER include/spdk/thread.h 00:04:25.067 TEST_HEADER include/spdk/trace.h 00:04:25.067 TEST_HEADER include/spdk/trace_parser.h 00:04:25.067 TEST_HEADER include/spdk/tree.h 00:04:25.067 TEST_HEADER include/spdk/ublk.h 00:04:25.067 TEST_HEADER include/spdk/util.h 00:04:25.067 TEST_HEADER include/spdk/uuid.h 00:04:25.067 TEST_HEADER include/spdk/version.h 00:04:25.067 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:25.067 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:25.067 TEST_HEADER include/spdk/vhost.h 00:04:25.067 TEST_HEADER include/spdk/vmd.h 00:04:25.067 TEST_HEADER include/spdk/xor.h 00:04:25.067 TEST_HEADER include/spdk/zipf.h 00:04:25.067 CXX test/cpp_headers/accel.o 00:04:25.067 LINK event_perf 00:04:25.326 LINK bdev_svc 00:04:25.326 LINK mkfs 00:04:25.326 LINK spdk_trace 00:04:25.326 CXX test/cpp_headers/accel_module.o 00:04:25.326 CC test/event/reactor/reactor.o 00:04:25.326 LINK bdevio 00:04:25.326 LINK dif 00:04:25.584 LINK test_dma 00:04:25.584 CXX test/cpp_headers/assert.o 00:04:25.584 LINK accel_perf 00:04:25.584 LINK reactor 00:04:25.584 CC test/app/histogram_perf/histogram_perf.o 00:04:25.584 CC app/trace_record/trace_record.o 00:04:25.584 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:25.585 CXX test/cpp_headers/barrier.o 00:04:25.585 LINK mem_callbacks 00:04:25.843 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 
00:04:25.843 LINK histogram_perf 00:04:25.843 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:25.843 CC test/event/reactor_perf/reactor_perf.o 00:04:25.843 CXX test/cpp_headers/base64.o 00:04:25.843 LINK spdk_trace_record 00:04:25.843 CC test/env/vtophys/vtophys.o 00:04:25.843 CXX test/cpp_headers/bdev.o 00:04:25.843 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:25.843 CC examples/blob/hello_world/hello_blob.o 00:04:25.843 LINK reactor_perf 00:04:25.843 CC examples/bdev/hello_world/hello_bdev.o 00:04:26.102 LINK vtophys 00:04:26.102 CXX test/cpp_headers/bdev_module.o 00:04:26.102 CC test/event/app_repeat/app_repeat.o 00:04:26.102 LINK nvme_fuzz 00:04:26.102 LINK hello_bdev 00:04:26.102 LINK hello_blob 00:04:26.360 CC app/nvmf_tgt/nvmf_main.o 00:04:26.360 CC examples/ioat/perf/perf.o 00:04:26.360 LINK app_repeat 00:04:26.360 CXX test/cpp_headers/bdev_zone.o 00:04:26.360 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:26.360 LINK vhost_fuzz 00:04:26.360 CC test/app/jsoncat/jsoncat.o 00:04:26.619 LINK nvmf_tgt 00:04:26.619 CC examples/blob/cli/blobcli.o 00:04:26.619 CXX test/cpp_headers/bit_array.o 00:04:26.619 CC examples/bdev/bdevperf/bdevperf.o 00:04:26.619 CXX test/cpp_headers/bit_pool.o 00:04:26.619 LINK ioat_perf 00:04:26.619 LINK env_dpdk_post_init 00:04:26.619 LINK jsoncat 00:04:26.619 CC test/event/scheduler/scheduler.o 00:04:26.878 CXX test/cpp_headers/blob_bdev.o 00:04:26.878 CC examples/ioat/verify/verify.o 00:04:26.878 CC test/app/stub/stub.o 00:04:26.878 CC app/iscsi_tgt/iscsi_tgt.o 00:04:26.878 CC test/env/memory/memory_ut.o 00:04:26.878 LINK scheduler 00:04:26.878 CC app/spdk_tgt/spdk_tgt.o 00:04:27.137 CXX test/cpp_headers/blobfs_bdev.o 00:04:27.137 LINK stub 00:04:27.137 LINK iscsi_tgt 00:04:27.137 LINK verify 00:04:27.137 CXX test/cpp_headers/blobfs.o 00:04:27.137 LINK blobcli 00:04:27.137 LINK spdk_tgt 00:04:27.396 CXX test/cpp_headers/blob.o 00:04:27.396 CC app/spdk_lspci/spdk_lspci.o 00:04:27.396 CC app/spdk_nvme_perf/perf.o 00:04:27.396 CC test/env/pci/pci_ut.o 00:04:27.396 LINK spdk_lspci 00:04:27.396 CC test/lvol/esnap/esnap.o 00:04:27.396 CXX test/cpp_headers/conf.o 00:04:27.396 CC app/spdk_nvme_identify/identify.o 00:04:27.658 CC examples/nvme/hello_world/hello_world.o 00:04:27.658 LINK bdevperf 00:04:27.658 CXX test/cpp_headers/config.o 00:04:27.658 CXX test/cpp_headers/cpuset.o 00:04:27.658 CC examples/nvme/reconnect/reconnect.o 00:04:27.658 CXX test/cpp_headers/crc16.o 00:04:27.918 LINK hello_world 00:04:27.918 LINK memory_ut 00:04:27.918 LINK pci_ut 00:04:27.918 LINK iscsi_fuzz 00:04:27.918 CXX test/cpp_headers/crc32.o 00:04:27.918 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:27.918 CXX test/cpp_headers/crc64.o 00:04:28.176 CXX test/cpp_headers/dif.o 00:04:28.176 LINK reconnect 00:04:28.176 CC test/rpc_client/rpc_client_test.o 00:04:28.176 CC test/nvme/aer/aer.o 00:04:28.435 CXX test/cpp_headers/dma.o 00:04:28.435 CC examples/sock/hello_world/hello_sock.o 00:04:28.435 CC examples/vmd/lsvmd/lsvmd.o 00:04:28.435 LINK spdk_nvme_perf 00:04:28.435 LINK rpc_client_test 00:04:28.435 CXX test/cpp_headers/endian.o 00:04:28.435 CC test/thread/poller_perf/poller_perf.o 00:04:28.435 LINK lsvmd 00:04:28.694 LINK spdk_nvme_identify 00:04:28.694 CXX test/cpp_headers/env_dpdk.o 00:04:28.694 LINK aer 00:04:28.694 CXX test/cpp_headers/env.o 00:04:28.694 LINK nvme_manage 00:04:28.694 LINK poller_perf 00:04:28.694 LINK hello_sock 00:04:28.694 CXX test/cpp_headers/event.o 00:04:28.694 CC examples/vmd/led/led.o 00:04:28.694 CXX test/cpp_headers/fd_group.o 
00:04:28.952 CC app/spdk_nvme_discover/discovery_aer.o 00:04:28.952 CC test/nvme/reset/reset.o 00:04:28.952 CC examples/nvme/arbitration/arbitration.o 00:04:28.952 CXX test/cpp_headers/fd.o 00:04:28.952 CXX test/cpp_headers/file.o 00:04:28.952 CC examples/nvme/hotplug/hotplug.o 00:04:28.952 LINK led 00:04:28.952 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:28.952 CXX test/cpp_headers/ftl.o 00:04:28.953 LINK spdk_nvme_discover 00:04:28.953 CC examples/nvme/abort/abort.o 00:04:29.210 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:29.210 LINK reset 00:04:29.210 LINK hotplug 00:04:29.210 LINK cmb_copy 00:04:29.210 LINK arbitration 00:04:29.210 CXX test/cpp_headers/gpt_spec.o 00:04:29.210 LINK pmr_persistence 00:04:29.468 CC app/spdk_top/spdk_top.o 00:04:29.468 CXX test/cpp_headers/hexlify.o 00:04:29.468 CC examples/nvmf/nvmf/nvmf.o 00:04:29.468 CC test/nvme/sgl/sgl.o 00:04:29.468 CC test/nvme/e2edp/nvme_dp.o 00:04:29.468 LINK abort 00:04:29.468 CXX test/cpp_headers/histogram_data.o 00:04:29.468 CC test/nvme/err_injection/err_injection.o 00:04:29.468 CC test/nvme/overhead/overhead.o 00:04:29.726 CC test/nvme/startup/startup.o 00:04:29.726 LINK nvmf 00:04:29.726 CXX test/cpp_headers/idxd.o 00:04:29.726 CXX test/cpp_headers/idxd_spec.o 00:04:29.726 LINK sgl 00:04:29.726 LINK err_injection 00:04:29.726 LINK startup 00:04:29.984 LINK nvme_dp 00:04:29.984 LINK overhead 00:04:29.984 CXX test/cpp_headers/init.o 00:04:29.984 CC test/nvme/reserve/reserve.o 00:04:29.984 CXX test/cpp_headers/ioat.o 00:04:29.984 CC app/vhost/vhost.o 00:04:30.243 CC app/spdk_dd/spdk_dd.o 00:04:30.243 CC examples/util/zipf/zipf.o 00:04:30.243 CC app/fio/nvme/fio_plugin.o 00:04:30.243 CC test/nvme/simple_copy/simple_copy.o 00:04:30.243 CXX test/cpp_headers/ioat_spec.o 00:04:30.243 CC app/fio/bdev/fio_plugin.o 00:04:30.243 LINK vhost 00:04:30.243 LINK zipf 00:04:30.502 LINK reserve 00:04:30.502 CXX test/cpp_headers/iscsi_spec.o 00:04:30.502 LINK simple_copy 00:04:30.502 LINK spdk_top 00:04:30.502 LINK spdk_dd 00:04:30.761 CXX test/cpp_headers/json.o 00:04:30.761 CC test/nvme/connect_stress/connect_stress.o 00:04:30.761 CXX test/cpp_headers/jsonrpc.o 00:04:30.761 CXX test/cpp_headers/keyring.o 00:04:30.761 CC examples/idxd/perf/perf.o 00:04:30.761 CC examples/thread/thread/thread_ex.o 00:04:30.761 CC test/nvme/boot_partition/boot_partition.o 00:04:31.020 LINK connect_stress 00:04:31.020 CXX test/cpp_headers/keyring_module.o 00:04:31.020 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:31.020 CXX test/cpp_headers/likely.o 00:04:31.020 LINK spdk_bdev 00:04:31.020 LINK spdk_nvme 00:04:31.020 LINK boot_partition 00:04:31.020 LINK thread 00:04:31.278 CC test/nvme/compliance/nvme_compliance.o 00:04:31.278 CXX test/cpp_headers/log.o 00:04:31.278 LINK interrupt_tgt 00:04:31.278 CC test/nvme/fused_ordering/fused_ordering.o 00:04:31.278 CC test/nvme/fdp/fdp.o 00:04:31.278 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:31.278 LINK idxd_perf 00:04:31.278 CC test/nvme/cuse/cuse.o 00:04:31.278 CXX test/cpp_headers/lvol.o 00:04:31.278 CXX test/cpp_headers/memory.o 00:04:31.278 CXX test/cpp_headers/mmio.o 00:04:31.536 CXX test/cpp_headers/nbd.o 00:04:31.536 CXX test/cpp_headers/notify.o 00:04:31.536 LINK doorbell_aers 00:04:31.536 LINK fused_ordering 00:04:31.536 CXX test/cpp_headers/nvme.o 00:04:31.536 CXX test/cpp_headers/nvme_intel.o 00:04:31.536 CXX test/cpp_headers/nvme_ocssd.o 00:04:31.536 LINK nvme_compliance 00:04:31.794 LINK fdp 00:04:31.794 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:31.794 CXX test/cpp_headers/nvme_spec.o 
00:04:31.794 CXX test/cpp_headers/nvme_zns.o 00:04:31.794 CXX test/cpp_headers/nvmf_cmd.o 00:04:31.794 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:31.794 CXX test/cpp_headers/nvmf.o 00:04:31.794 CXX test/cpp_headers/nvmf_spec.o 00:04:31.794 CXX test/cpp_headers/nvmf_transport.o 00:04:31.794 CXX test/cpp_headers/opal.o 00:04:31.794 CXX test/cpp_headers/opal_spec.o 00:04:31.794 CXX test/cpp_headers/pci_ids.o 00:04:32.052 CXX test/cpp_headers/pipe.o 00:04:32.052 CXX test/cpp_headers/queue.o 00:04:32.052 CXX test/cpp_headers/reduce.o 00:04:32.052 CXX test/cpp_headers/rpc.o 00:04:32.052 CXX test/cpp_headers/scheduler.o 00:04:32.052 CXX test/cpp_headers/scsi.o 00:04:32.052 CXX test/cpp_headers/scsi_spec.o 00:04:32.052 CXX test/cpp_headers/sock.o 00:04:32.052 CXX test/cpp_headers/stdinc.o 00:04:32.052 CXX test/cpp_headers/string.o 00:04:32.052 CXX test/cpp_headers/thread.o 00:04:32.052 CXX test/cpp_headers/trace.o 00:04:32.310 CXX test/cpp_headers/trace_parser.o 00:04:32.310 CXX test/cpp_headers/tree.o 00:04:32.310 CXX test/cpp_headers/ublk.o 00:04:32.310 CXX test/cpp_headers/util.o 00:04:32.310 CXX test/cpp_headers/uuid.o 00:04:32.310 CXX test/cpp_headers/version.o 00:04:32.310 CXX test/cpp_headers/vfio_user_pci.o 00:04:32.310 CXX test/cpp_headers/vfio_user_spec.o 00:04:32.310 CXX test/cpp_headers/vhost.o 00:04:32.310 CXX test/cpp_headers/vmd.o 00:04:32.310 CXX test/cpp_headers/xor.o 00:04:32.310 CXX test/cpp_headers/zipf.o 00:04:32.569 LINK cuse 00:04:33.947 LINK esnap 00:04:34.515 ************************************ 00:04:34.515 END TEST make 00:04:34.515 ************************************ 00:04:34.515 00:04:34.515 real 1m1.885s 00:04:34.515 user 5m48.280s 00:04:34.515 sys 1m7.185s 00:04:34.515 02:51:20 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:04:34.515 02:51:20 make -- common/autotest_common.sh@10 -- $ set +x 00:04:34.515 02:51:20 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:34.515 02:51:20 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:34.515 02:51:20 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:34.515 02:51:20 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:34.515 02:51:20 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:34.515 02:51:20 -- pm/common@44 -- $ pid=5959 00:04:34.515 02:51:20 -- pm/common@50 -- $ kill -TERM 5959 00:04:34.515 02:51:20 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:34.515 02:51:20 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:34.515 02:51:20 -- pm/common@44 -- $ pid=5961 00:04:34.515 02:51:20 -- pm/common@50 -- $ kill -TERM 5961 00:04:34.515 02:51:20 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:34.515 02:51:20 -- nvmf/common.sh@7 -- # uname -s 00:04:34.515 02:51:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:34.515 02:51:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:34.515 02:51:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:34.515 02:51:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:34.515 02:51:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:34.515 02:51:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:34.515 02:51:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:34.515 02:51:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:34.515 02:51:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:34.515 02:51:20 -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:34.515 02:51:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:756e7923-3988-42eb-8b30-1352b2ea50fd 00:04:34.515 02:51:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=756e7923-3988-42eb-8b30-1352b2ea50fd 00:04:34.515 02:51:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:34.515 02:51:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:34.515 02:51:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:34.515 02:51:20 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:34.515 02:51:20 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:34.515 02:51:20 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:34.516 02:51:20 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:34.516 02:51:20 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:34.516 02:51:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:34.516 02:51:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:34.516 02:51:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:34.516 02:51:20 -- paths/export.sh@5 -- # export PATH 00:04:34.516 02:51:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:34.516 02:51:20 -- nvmf/common.sh@47 -- # : 0 00:04:34.516 02:51:20 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:34.516 02:51:20 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:34.516 02:51:20 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:34.516 02:51:20 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:34.516 02:51:20 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:34.516 02:51:20 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:34.516 02:51:20 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:34.516 02:51:20 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:34.516 02:51:20 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:34.516 02:51:20 -- spdk/autotest.sh@32 -- # uname -s 00:04:34.516 02:51:20 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:34.516 02:51:20 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:34.516 02:51:20 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:34.516 02:51:20 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:34.516 02:51:20 -- spdk/autotest.sh@40 -- # echo 
/home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:34.516 02:51:20 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:34.775 02:51:20 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:34.775 02:51:20 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:34.775 02:51:20 -- spdk/autotest.sh@48 -- # udevadm_pid=66937 00:04:34.775 02:51:20 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:34.775 02:51:20 -- pm/common@17 -- # local monitor 00:04:34.775 02:51:20 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:34.775 02:51:20 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:34.775 02:51:20 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:34.775 02:51:20 -- pm/common@25 -- # sleep 1 00:04:34.775 02:51:20 -- pm/common@21 -- # date +%s 00:04:34.775 02:51:20 -- pm/common@21 -- # date +%s 00:04:34.775 02:51:20 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1715655080 00:04:34.775 02:51:20 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1715655080 00:04:34.775 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1715655080_collect-vmstat.pm.log 00:04:34.775 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1715655080_collect-cpu-load.pm.log 00:04:35.713 02:51:21 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:35.713 02:51:21 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:35.713 02:51:21 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:35.713 02:51:21 -- common/autotest_common.sh@10 -- # set +x 00:04:35.713 02:51:21 -- spdk/autotest.sh@59 -- # create_test_list 00:04:35.713 02:51:21 -- common/autotest_common.sh@744 -- # xtrace_disable 00:04:35.713 02:51:21 -- common/autotest_common.sh@10 -- # set +x 00:04:35.713 02:51:21 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:35.713 02:51:21 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:35.713 02:51:21 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:35.713 02:51:21 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:35.713 02:51:21 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:35.713 02:51:21 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:35.713 02:51:21 -- common/autotest_common.sh@1451 -- # uname 00:04:35.713 02:51:21 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:04:35.713 02:51:21 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:35.713 02:51:21 -- common/autotest_common.sh@1471 -- # uname 00:04:35.713 02:51:21 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:04:35.713 02:51:21 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:35.713 02:51:21 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:35.713 02:51:21 -- spdk/autotest.sh@72 -- # hash lcov 00:04:35.713 02:51:21 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:35.713 02:51:21 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:35.713 --rc lcov_branch_coverage=1 00:04:35.713 --rc lcov_function_coverage=1 00:04:35.713 --rc genhtml_branch_coverage=1 00:04:35.713 --rc genhtml_function_coverage=1 00:04:35.713 --rc genhtml_legend=1 00:04:35.713 --rc geninfo_all_blocks=1 00:04:35.713 ' 
00:04:35.713 02:51:21 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:35.713 --rc lcov_branch_coverage=1 00:04:35.713 --rc lcov_function_coverage=1 00:04:35.713 --rc genhtml_branch_coverage=1 00:04:35.713 --rc genhtml_function_coverage=1 00:04:35.713 --rc genhtml_legend=1 00:04:35.713 --rc geninfo_all_blocks=1 00:04:35.713 ' 00:04:35.713 02:51:21 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:35.713 --rc lcov_branch_coverage=1 00:04:35.713 --rc lcov_function_coverage=1 00:04:35.713 --rc genhtml_branch_coverage=1 00:04:35.713 --rc genhtml_function_coverage=1 00:04:35.713 --rc genhtml_legend=1 00:04:35.713 --rc geninfo_all_blocks=1 00:04:35.713 --no-external' 00:04:35.713 02:51:21 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:35.713 --rc lcov_branch_coverage=1 00:04:35.713 --rc lcov_function_coverage=1 00:04:35.713 --rc genhtml_branch_coverage=1 00:04:35.713 --rc genhtml_function_coverage=1 00:04:35.713 --rc genhtml_legend=1 00:04:35.713 --rc geninfo_all_blocks=1 00:04:35.713 --no-external' 00:04:35.713 02:51:21 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:35.973 lcov: LCOV version 1.14 00:04:35.973 02:51:21 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:44.094 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:04:44.094 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:04:44.094 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:04:44.094 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:04:44.094 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:04:44.094 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:04:49.363 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:49.363 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:59.342 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:59.342 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:04:59.342 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:59.342 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:04:59.342 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:59.342 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:04:59.342 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:59.342 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:04:59.342 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:59.342 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:04:59.342 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:59.342 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:04:59.342 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:59.342 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:04:59.342 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:59.342 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:04:59.342 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:59.342 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:04:59.342 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:59.342 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:04:59.342 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:59.342 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:04:59.342 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:59.342 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:59.342 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:59.342 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:04:59.342 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:59.342 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:04:59.342 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:59.342 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:04:59.342 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:04:59.342 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:04:59.342 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:59.342 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:04:59.342 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:59.342 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:04:59.342 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:59.342 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:04:59.342 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:59.342 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:04:59.342 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:59.342 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:04:59.342 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no 
functions found 00:04:59.342 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:04:59.342 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:59.342 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:04:59.342 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:59.342 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:04:59.342 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:04:59.342 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:04:59.342 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:04:59.342 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:04:59.342 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:59.342 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:04:59.342 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:59.342 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:04:59.342 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:04:59.342 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:04:59.342 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:59.342 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:04:59.342 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:59.342 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:04:59.342 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:59.342 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:04:59.342 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:59.342 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:04:59.342 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:59.342 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:04:59.342 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:59.342 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:04:59.342 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:04:59.342 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:04:59.342 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:59.342 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:04:59.342 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:59.342 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 
00:04:59.342 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:59.342 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:59.342 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:04:59.342 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:04:59.342 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:59.342 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:04:59.342 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:59.342 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:04:59.342 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:59.342 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:04:59.342 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:59.342 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:04:59.342 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:04:59.342 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:04:59.342 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:59.342 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:04:59.342 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:59.342 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:04:59.342 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:59.342 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:04:59.342 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:59.342 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:04:59.342 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:59.342 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:04:59.342 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:59.342 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:04:59.342 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:59.342 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:04:59.342 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:59.342 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:59.342 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:59.342 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:59.342 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:59.343 
geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:04:59.343 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:59.343 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:04:59.343 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:59.343 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:59.343 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:59.343 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:59.343 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:59.343 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:04:59.343 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:59.343 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:59.343 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:59.343 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:59.343 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:59.343 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:04:59.343 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:59.343 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:04:59.343 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:59.343 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:04:59.343 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:59.343 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:04:59.343 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:59.343 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:04:59.343 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:59.343 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:04:59.604 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:59.604 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:04:59.604 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:59.604 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:04:59.604 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:59.604 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:04:59.604 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:59.604 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:04:59.604 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:59.604 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:04:59.604 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:59.604 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:04:59.604 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:04:59.604 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:04:59.604 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:59.604 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:04:59.604 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:59.604 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:04:59.604 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:59.604 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:04:59.604 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:59.604 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:04:59.604 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:59.604 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:04:59.604 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:04:59.604 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:04:59.604 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:59.604 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:04:59.604 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:04:59.604 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:04:59.604 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:59.604 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:59.604 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:59.604 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:59.604 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:59.604 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:04:59.604 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:59.604 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:04:59.605 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:59.605 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:04:59.605 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no 
functions found 00:04:59.605 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:05:02.971 02:51:48 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:05:02.971 02:51:48 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:02.971 02:51:48 -- common/autotest_common.sh@10 -- # set +x 00:05:02.971 02:51:48 -- spdk/autotest.sh@91 -- # rm -f 00:05:02.971 02:51:48 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:02.971 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:03.539 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:03.539 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:03.539 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:05:03.539 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:05:03.539 02:51:49 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:05:03.539 02:51:49 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:05:03.539 02:51:49 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:05:03.539 02:51:49 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:05:03.539 02:51:49 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:03.539 02:51:49 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:05:03.539 02:51:49 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:05:03.539 02:51:49 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:03.539 02:51:49 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:03.539 02:51:49 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:03.539 02:51:49 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:05:03.539 02:51:49 -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:05:03.539 02:51:49 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:03.539 02:51:49 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:03.539 02:51:49 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:03.539 02:51:49 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n1 00:05:03.539 02:51:49 -- common/autotest_common.sh@1658 -- # local device=nvme2n1 00:05:03.539 02:51:49 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:03.539 02:51:49 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:03.539 02:51:49 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:03.539 02:51:49 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n2 00:05:03.539 02:51:49 -- common/autotest_common.sh@1658 -- # local device=nvme2n2 00:05:03.539 02:51:49 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:03.539 02:51:49 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:03.539 02:51:49 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:03.539 02:51:49 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n3 00:05:03.539 02:51:49 -- common/autotest_common.sh@1658 -- # local device=nvme2n3 00:05:03.539 02:51:49 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:05:03.539 02:51:49 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:03.539 02:51:49 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:03.539 02:51:49 -- common/autotest_common.sh@1669 -- # 
is_block_zoned nvme3c3n1 00:05:03.539 02:51:49 -- common/autotest_common.sh@1658 -- # local device=nvme3c3n1 00:05:03.539 02:51:49 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:03.539 02:51:49 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:03.539 02:51:49 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:03.539 02:51:49 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3n1 00:05:03.539 02:51:49 -- common/autotest_common.sh@1658 -- # local device=nvme3n1 00:05:03.539 02:51:49 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:03.539 02:51:49 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:03.539 02:51:49 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:05:03.539 02:51:49 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:03.539 02:51:49 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:03.539 02:51:49 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:05:03.539 02:51:49 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:05:03.539 02:51:49 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:03.539 No valid GPT data, bailing 00:05:03.539 02:51:49 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:03.799 02:51:49 -- scripts/common.sh@391 -- # pt= 00:05:03.799 02:51:49 -- scripts/common.sh@392 -- # return 1 00:05:03.799 02:51:49 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:03.799 1+0 records in 00:05:03.799 1+0 records out 00:05:03.799 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121507 s, 86.3 MB/s 00:05:03.799 02:51:49 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:03.799 02:51:49 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:03.799 02:51:49 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:05:03.799 02:51:49 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:05:03.799 02:51:49 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:03.799 No valid GPT data, bailing 00:05:03.799 02:51:49 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:03.799 02:51:49 -- scripts/common.sh@391 -- # pt= 00:05:03.799 02:51:49 -- scripts/common.sh@392 -- # return 1 00:05:03.799 02:51:49 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:03.799 1+0 records in 00:05:03.799 1+0 records out 00:05:03.799 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00466422 s, 225 MB/s 00:05:03.799 02:51:49 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:03.799 02:51:49 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:03.799 02:51:49 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n1 00:05:03.799 02:51:49 -- scripts/common.sh@378 -- # local block=/dev/nvme2n1 pt 00:05:03.799 02:51:49 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:05:03.799 No valid GPT data, bailing 00:05:03.799 02:51:49 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:03.799 02:51:49 -- scripts/common.sh@391 -- # pt= 00:05:03.799 02:51:49 -- scripts/common.sh@392 -- # return 1 00:05:03.799 02:51:49 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:05:03.799 1+0 records in 00:05:03.799 1+0 records out 00:05:03.799 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00440628 s, 238 MB/s 00:05:03.799 02:51:49 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 
00:05:03.799 02:51:49 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:03.799 02:51:49 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n2 00:05:03.799 02:51:49 -- scripts/common.sh@378 -- # local block=/dev/nvme2n2 pt 00:05:03.799 02:51:49 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:05:03.799 No valid GPT data, bailing 00:05:03.799 02:51:49 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:05:04.057 02:51:49 -- scripts/common.sh@391 -- # pt= 00:05:04.057 02:51:49 -- scripts/common.sh@392 -- # return 1 00:05:04.057 02:51:49 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:05:04.057 1+0 records in 00:05:04.057 1+0 records out 00:05:04.057 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00367512 s, 285 MB/s 00:05:04.057 02:51:49 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:04.057 02:51:49 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:04.057 02:51:49 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n3 00:05:04.057 02:51:49 -- scripts/common.sh@378 -- # local block=/dev/nvme2n3 pt 00:05:04.057 02:51:49 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:05:04.057 No valid GPT data, bailing 00:05:04.057 02:51:49 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:05:04.057 02:51:49 -- scripts/common.sh@391 -- # pt= 00:05:04.057 02:51:49 -- scripts/common.sh@392 -- # return 1 00:05:04.057 02:51:49 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:05:04.057 1+0 records in 00:05:04.057 1+0 records out 00:05:04.057 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00460223 s, 228 MB/s 00:05:04.057 02:51:49 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:04.057 02:51:49 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:04.057 02:51:49 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme3n1 00:05:04.057 02:51:49 -- scripts/common.sh@378 -- # local block=/dev/nvme3n1 pt 00:05:04.057 02:51:49 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:05:04.057 No valid GPT data, bailing 00:05:04.057 02:51:49 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:04.057 02:51:49 -- scripts/common.sh@391 -- # pt= 00:05:04.057 02:51:49 -- scripts/common.sh@392 -- # return 1 00:05:04.058 02:51:49 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:05:04.058 1+0 records in 00:05:04.058 1+0 records out 00:05:04.058 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00427663 s, 245 MB/s 00:05:04.058 02:51:49 -- spdk/autotest.sh@118 -- # sync 00:05:04.624 02:51:50 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:04.624 02:51:50 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:04.624 02:51:50 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:06.525 02:51:52 -- spdk/autotest.sh@124 -- # uname -s 00:05:06.526 02:51:52 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:05:06.526 02:51:52 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:06.526 02:51:52 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:06.526 02:51:52 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:06.526 02:51:52 -- common/autotest_common.sh@10 -- # set +x 00:05:06.526 ************************************ 00:05:06.526 START TEST setup.sh 00:05:06.526 ************************************ 00:05:06.526 02:51:52 
setup.sh -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:06.526 * Looking for test storage... 00:05:06.526 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:06.526 02:51:52 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:05:06.526 02:51:52 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:06.526 02:51:52 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:06.526 02:51:52 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:06.526 02:51:52 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:06.526 02:51:52 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:06.526 ************************************ 00:05:06.526 START TEST acl 00:05:06.526 ************************************ 00:05:06.526 02:51:52 setup.sh.acl -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:06.526 * Looking for test storage... 00:05:06.526 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:06.526 02:51:52 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:05:06.526 02:51:52 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:05:06.526 02:51:52 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:05:06.526 02:51:52 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:05:06.526 02:51:52 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:06.526 02:51:52 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:05:06.526 02:51:52 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:05:06.526 02:51:52 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:06.526 02:51:52 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:06.526 02:51:52 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:06.526 02:51:52 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:05:06.526 02:51:52 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:05:06.526 02:51:52 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:06.526 02:51:52 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:06.526 02:51:52 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:06.526 02:51:52 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n1 00:05:06.526 02:51:52 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme2n1 00:05:06.526 02:51:52 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:06.526 02:51:52 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:06.526 02:51:52 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:06.526 02:51:52 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n2 00:05:06.526 02:51:52 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme2n2 00:05:06.526 02:51:52 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:06.526 02:51:52 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:06.526 02:51:52 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in 
/sys/block/nvme* 00:05:06.526 02:51:52 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n3 00:05:06.526 02:51:52 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme2n3 00:05:06.526 02:51:52 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:05:06.526 02:51:52 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:06.526 02:51:52 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:06.526 02:51:52 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3c3n1 00:05:06.526 02:51:52 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme3c3n1 00:05:06.526 02:51:52 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:06.526 02:51:52 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:06.526 02:51:52 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:06.526 02:51:52 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3n1 00:05:06.526 02:51:52 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme3n1 00:05:06.526 02:51:52 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:06.526 02:51:52 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:06.526 02:51:52 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:05:06.526 02:51:52 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:05:06.526 02:51:52 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:05:06.526 02:51:52 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:05:06.526 02:51:52 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:05:06.526 02:51:52 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:06.526 02:51:52 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:07.902 02:51:53 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:05:07.902 02:51:53 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:05:07.902 02:51:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:07.902 02:51:53 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:05:07.902 02:51:53 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:05:07.902 02:51:53 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:08.161 02:51:54 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:05:08.161 02:51:54 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:08.161 02:51:54 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:08.729 Hugepages 00:05:08.729 node hugesize free / total 00:05:08.729 02:51:54 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:08.729 02:51:54 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:08.729 02:51:54 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:08.729 00:05:08.729 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:08.729 02:51:54 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:08.729 02:51:54 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:08.729 02:51:54 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:08.729 02:51:54 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:05:08.729 02:51:54 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:05:08.729 
02:51:54 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:08.729 02:51:54 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:08.729 02:51:54 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:05:08.729 02:51:54 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:08.729 02:51:54 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:08.729 02:51:54 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:08.729 02:51:54 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:08.729 02:51:54 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:08.729 02:51:54 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:05:08.729 02:51:54 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:08.729 02:51:54 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:08.729 02:51:54 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:08.729 02:51:54 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:08.729 02:51:54 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:08.989 02:51:54 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:12.0 == *:*:*.* ]] 00:05:08.989 02:51:54 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:08.989 02:51:54 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:05:08.989 02:51:54 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:08.989 02:51:54 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:08.989 02:51:54 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:08.989 02:51:54 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:13.0 == *:*:*.* ]] 00:05:08.989 02:51:54 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:08.989 02:51:54 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]] 00:05:08.989 02:51:54 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:08.989 02:51:54 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:08.989 02:51:54 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:08.989 02:51:54 setup.sh.acl -- setup/acl.sh@24 -- # (( 4 > 0 )) 00:05:08.989 02:51:54 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:05:08.989 02:51:54 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:08.989 02:51:54 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:08.989 02:51:54 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:08.989 ************************************ 00:05:08.989 START TEST denied 00:05:08.989 ************************************ 00:05:08.989 02:51:54 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:05:08.989 02:51:54 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:05:08.989 02:51:54 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:05:08.989 02:51:54 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:05:08.989 02:51:54 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:05:08.989 02:51:54 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:10.368 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:05:10.368 02:51:56 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:05:10.368 02:51:56 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev 
driver 00:05:10.368 02:51:56 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:05:10.368 02:51:56 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:05:10.368 02:51:56 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:05:10.368 02:51:56 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:10.368 02:51:56 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:10.368 02:51:56 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:05:10.368 02:51:56 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:10.368 02:51:56 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:16.936 00:05:16.936 real 0m7.185s 00:05:16.936 user 0m0.845s 00:05:16.936 sys 0m1.383s 00:05:16.936 02:52:02 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:16.936 ************************************ 00:05:16.936 END TEST denied 00:05:16.936 02:52:02 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:05:16.936 ************************************ 00:05:16.936 02:52:02 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:16.936 02:52:02 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:16.936 02:52:02 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:16.936 02:52:02 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:16.936 ************************************ 00:05:16.936 START TEST allowed 00:05:16.936 ************************************ 00:05:16.936 02:52:02 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed 00:05:16.936 02:52:02 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:05:16.936 02:52:02 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:05:16.936 02:52:02 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:05:16.936 02:52:02 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:16.936 02:52:02 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:05:17.505 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:17.505 02:52:03 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:17.505 02:52:03 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:05:17.505 02:52:03 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:05:17.505 02:52:03 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:05:17.505 02:52:03 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:05:17.505 02:52:03 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:17.505 02:52:03 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:17.505 02:52:03 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:05:17.505 02:52:03 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:12.0 ]] 00:05:17.505 02:52:03 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:12.0/driver 00:05:17.505 02:52:03 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:17.505 02:52:03 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:17.505 02:52:03 
setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:05:17.505 02:52:03 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:13.0 ]] 00:05:17.505 02:52:03 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:13.0/driver 00:05:17.505 02:52:03 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:17.505 02:52:03 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:17.505 02:52:03 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:05:17.505 02:52:03 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:17.505 02:52:03 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:18.439 00:05:18.439 real 0m2.232s 00:05:18.439 user 0m1.061s 00:05:18.439 sys 0m1.166s 00:05:18.439 02:52:04 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:18.439 ************************************ 00:05:18.439 END TEST allowed 00:05:18.439 ************************************ 00:05:18.439 02:52:04 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:05:18.439 ************************************ 00:05:18.439 END TEST acl 00:05:18.439 ************************************ 00:05:18.439 00:05:18.439 real 0m12.097s 00:05:18.439 user 0m3.141s 00:05:18.439 sys 0m3.986s 00:05:18.439 02:52:04 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:18.439 02:52:04 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:18.439 02:52:04 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:18.439 02:52:04 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:18.439 02:52:04 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:18.439 02:52:04 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:18.700 ************************************ 00:05:18.700 START TEST hugepages 00:05:18.700 ************************************ 00:05:18.700 02:52:04 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:18.700 * Looking for test storage... 
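The hugepages suite that starts here spends most of its output in setup/common.sh's get_meminfo helper, whose xtrace fills the next several screens: it walks /proc/meminfo field by field, skipping every key with "continue" until it reaches Hugepagesize and echoes the value (2048 kB on this runner). A simplified sketch of that lookup, reconstructed from the trace (the real helper slurps the file with mapfile, strips per-node "Node N" prefixes, and can read a node-specific meminfo when a node argument is given; names follow the trace):

    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo var val _
        # with a node argument the helper reads that node's meminfo instead
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"            # e.g. get_meminfo Hugepagesize -> 2048
                return 0
            fi
        done < "$mem_f"
        return 1
    }

hugepages.sh then uses the result to set default_hugepages=2048 and points the per-size and global counters at /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages and /proc/sys/vm/nr_hugepages, as the trace shows a little further on.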
00:05:18.700 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 3968252 kB' 'MemAvailable: 7370220 kB' 'Buffers: 2696 kB' 'Cached: 3602816 kB' 'SwapCached: 0 kB' 'Active: 838320 kB' 'Inactive: 2868532 kB' 'Active(anon): 111856 kB' 'Inactive(anon): 0 kB' 'Active(file): 726464 kB' 'Inactive(file): 2868532 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 103064 kB' 'Mapped: 48824 kB' 'Shmem: 10516 kB' 'KReclaimable: 87824 kB' 'Slab: 168500 kB' 'SReclaimable: 87824 kB' 'SUnreclaim: 80676 kB' 'KernelStack: 6444 kB' 'PageTables: 3948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412436 kB' 'Committed_AS: 325868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': 
' 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:18.700 02:52:04 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.700 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # 
read -r var val _ 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:05:18.701 02:52:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.702 02:52:04 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:05:18.702 02:52:04 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:05:18.702 02:52:04 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:18.702 02:52:04 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:18.702 02:52:04 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:18.702 02:52:04 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:18.702 02:52:04 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:18.702 02:52:04 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:18.702 02:52:04 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:18.702 02:52:04 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:05:18.702 02:52:04 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:05:18.702 02:52:04 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:18.702 02:52:04 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:18.702 02:52:04 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:18.702 02:52:04 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:18.702 02:52:04 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:05:18.702 02:52:04 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:18.702 02:52:04 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:18.702 02:52:04 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:18.702 02:52:04 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:18.702 02:52:04 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:18.702 02:52:04 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:18.702 02:52:04 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:18.702 02:52:04 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:18.702 02:52:04 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:18.702 02:52:04 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:18.702 02:52:04 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:18.702 02:52:04 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:18.702 ************************************ 00:05:18.702 START TEST default_setup 00:05:18.702 ************************************ 00:05:18.702 02:52:04 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup 00:05:18.702 02:52:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:18.702 02:52:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:05:18.702 02:52:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:18.702 02:52:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:05:18.702 02:52:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # 
node_ids=('0') 00:05:18.702 02:52:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:05:18.702 02:52:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:18.702 02:52:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:18.702 02:52:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:18.702 02:52:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:18.702 02:52:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:05:18.702 02:52:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:18.702 02:52:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:18.702 02:52:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:18.702 02:52:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:18.702 02:52:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:18.702 02:52:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:18.702 02:52:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:18.702 02:52:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:05:18.702 02:52:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:05:18.702 02:52:04 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:05:18.702 02:52:04 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:19.271 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:19.839 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:19.839 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:19.839 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:19.839 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:20.104 
02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6079424 kB' 'MemAvailable: 9481116 kB' 'Buffers: 2696 kB' 'Cached: 3602796 kB' 'SwapCached: 0 kB' 'Active: 856196 kB' 'Inactive: 2868528 kB' 'Active(anon): 129732 kB' 'Inactive(anon): 0 kB' 'Active(file): 726464 kB' 'Inactive(file): 2868528 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 284 kB' 'Writeback: 0 kB' 'AnonPages: 120908 kB' 'Mapped: 48864 kB' 'Shmem: 10476 kB' 'KReclaimable: 87280 kB' 'Slab: 167816 kB' 'SReclaimable: 87280 kB' 'SUnreclaim: 80536 kB' 'KernelStack: 6512 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 347564 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.104 02:52:05 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.104 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6079424 kB' 'MemAvailable: 9481116 kB' 'Buffers: 2696 kB' 'Cached: 3602800 kB' 'SwapCached: 0 kB' 'Active: 855636 kB' 'Inactive: 2868528 kB' 'Active(anon): 129172 kB' 'Inactive(anon): 0 kB' 'Active(file): 726464 kB' 'Inactive(file): 2868528 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 284 kB' 'Writeback: 0 kB' 'AnonPages: 120284 kB' 'Mapped: 48916 kB' 'Shmem: 10476 kB' 'KReclaimable: 87280 kB' 'Slab: 167816 kB' 'SReclaimable: 87280 kB' 'SUnreclaim: 80536 kB' 'KernelStack: 6432 kB' 'PageTables: 4068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 347564 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 
'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.105 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.106 02:52:05 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.106 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
read -r var val _ 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# continue 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.107 02:52:05 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6079172 kB' 'MemAvailable: 9480864 kB' 'Buffers: 2696 kB' 'Cached: 3602800 kB' 'SwapCached: 0 kB' 'Active: 855760 kB' 'Inactive: 2868528 kB' 'Active(anon): 129296 kB' 'Inactive(anon): 0 kB' 'Active(file): 726464 kB' 'Inactive(file): 2868528 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 284 kB' 'Writeback: 0 kB' 'AnonPages: 120392 kB' 'Mapped: 48916 kB' 'Shmem: 10476 kB' 'KReclaimable: 87280 kB' 'Slab: 167820 kB' 'SReclaimable: 87280 kB' 'SUnreclaim: 80540 kB' 'KernelStack: 6432 kB' 'PageTables: 4068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 347564 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.107 02:52:05 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.107 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var 
val _ 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.108 02:52:05 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.108 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.109 
02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.109 02:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.109 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:20.109 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:20.109 02:52:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:05:20.109 02:52:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:20.109 nr_hugepages=1024 00:05:20.109 02:52:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:20.109 resv_hugepages=0 00:05:20.109 02:52:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:20.109 surplus_hugepages=0 00:05:20.109 02:52:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:20.109 anon_hugepages=0 00:05:20.109 02:52:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:20.109 02:52:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == 
nr_hugepages )) 00:05:20.109 02:52:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:20.109 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:20.109 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:20.109 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:20.109 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6079172 kB' 'MemAvailable: 9480888 kB' 'Buffers: 2696 kB' 'Cached: 3602800 kB' 'SwapCached: 0 kB' 'Active: 855512 kB' 'Inactive: 2868552 kB' 'Active(anon): 129048 kB' 'Inactive(anon): 0 kB' 'Active(file): 726464 kB' 'Inactive(file): 2868552 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 284 kB' 'Writeback: 0 kB' 'AnonPages: 120168 kB' 'Mapped: 48796 kB' 'Shmem: 10476 kB' 'KReclaimable: 87280 kB' 'Slab: 167812 kB' 'SReclaimable: 87280 kB' 'SUnreclaim: 80532 kB' 'KernelStack: 6448 kB' 'PageTables: 4104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 347564 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.110 02:52:06 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.110 02:52:06 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.110 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
read -r var val _ 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 
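# The xtrace above is setup/common.sh's get_meminfo scanning a meminfo snapshot
# key by key until it reaches the requested field (HugePages_Total here), and,
# when a node id is supplied, switching from /proc/meminfo to the per-node file
# under /sys/devices/system/node. A minimal standalone sketch of the same idea
# (not the SPDK helper itself; function and variable names are illustrative):
get_meminfo_sketch() {
    local key=$1 node=$2
    local mem_f=/proc/meminfo
    # With a node id, read the per-node snapshot instead of the system-wide one.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
        && mem_f=/sys/devices/system/node/node$node/meminfo
    # Per-node lines carry a "Node <N>" prefix; strip it, then match on the key.
    while IFS=': ' read -r var val _; do
        [[ $var == "$key" ]] && { echo "$val"; return 0; }
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}
# e.g. get_meminfo_sketch HugePages_Total   -> 1024 on this runner
#      get_meminfo_sketch HugePages_Surp 0  -> 0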
00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.111 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6079808 kB' 'MemUsed: 6162164 kB' 'SwapCached: 0 kB' 'Active: 855512 kB' 'Inactive: 2868552 kB' 'Active(anon): 129048 kB' 'Inactive(anon): 0 kB' 'Active(file): 726464 kB' 'Inactive(file): 2868552 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 284 kB' 'Writeback: 0 kB' 'FilePages: 3605496 kB' 'Mapped: 48800 kB' 'AnonPages: 119968 kB' 'Shmem: 10476 kB' 'KernelStack: 6416 kB' 'PageTables: 4004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 87280 kB' 'Slab: 167812 kB' 'SReclaimable: 87280 kB' 'SUnreclaim: 80532 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.112 02:52:06 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.112 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.113 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.113 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.113 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.113 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.113 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.113 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.113 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.113 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.113 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.113 02:52:06 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.113 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.113 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.113 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.113 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.113 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.113 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.113 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.113 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.113 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.113 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.113 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.113 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.113 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.113 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.113 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.113 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.113 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.113 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.113 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.113 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:20.113 02:52:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:20.113 node0=1024 expecting 1024 00:05:20.113 02:52:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:20.113 02:52:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:20.113 02:52:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:20.113 02:52:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:20.113 02:52:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:20.113 02:52:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:20.113 00:05:20.113 real 0m1.445s 00:05:20.113 user 0m0.656s 00:05:20.113 sys 0m0.731s 00:05:20.113 02:52:06 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:20.113 02:52:06 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:05:20.113 ************************************ 00:05:20.113 END TEST default_setup 00:05:20.113 ************************************ 00:05:20.113 02:52:06 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:05:20.113 02:52:06 
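# default_setup passes here because the runner exposes a single NUMA node
# holding all 1024 default-sized (2048 kB) hugepages: 1024 * 2048 kB =
# 2097152 kB, matching the 'Hugetlb: 2097152 kB' field in the snapshot above,
# with HugePages_Surp and HugePages_Rsvd both 0. A sketch of the acceptance
# arithmetic the trace performs (illustrative names, using the sketch helper
# above; not the verbatim hugepages.sh code):
nr_hugepages=1024 surp=0 resv=0
total=$(get_meminfo_sketch HugePages_Total)                  # 1024 in this run
(( total == nr_hugepages + surp + resv )) || echo "unexpected total: $total"
node0=$(( nr_hugepages + $(get_meminfo_sketch HugePages_Surp 0) ))
echo "node0=$node0 expecting $nr_hugepages"                  # node0=1024 expecting 1024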
setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:20.113 02:52:06 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:20.113 02:52:06 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:20.113 ************************************ 00:05:20.113 START TEST per_node_1G_alloc 00:05:20.113 ************************************ 00:05:20.113 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc 00:05:20.113 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:05:20.113 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:05:20.113 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:20.113 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:20.113 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:05:20.113 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:20.113 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:20.113 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:20.113 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:20.113 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:20.113 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:20.113 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:20.113 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:20.113 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:20.113 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:20.113 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:20.113 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:20.113 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:20.113 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:20.113 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:20.113 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:05:20.113 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:05:20.113 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:05:20.113 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:20.113 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:20.683 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:20.683 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:20.683 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:20.683 0000:00:12.0 (1b36 0010): 
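# In the per_node_1G_alloc trace above, get_test_nr_hugepages converts the
# requested 1048576 kB (1 GiB, to be placed on node 0) into a page count using
# the default hugepage size, 2048 kB on this runner: 1048576 / 2048 = 512,
# hence NRHUGE=512 and HUGENODE=0 before setup.sh is re-run. A sketch of that
# conversion (illustrative names, using the sketch helper above):
size_kb=1048576
hugepagesize_kb=$(get_meminfo_sketch Hugepagesize)   # 2048 on this runner
NRHUGE=$(( size_kb / hugepagesize_kb ))              # 512
HUGENODE=0
echo "NRHUGE=$NRHUGE HUGENODE=$HUGENODE"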
Already using the uio_pci_generic driver 00:05:20.683 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:20.683 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:05:20.683 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:05:20.683 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:20.683 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:20.683 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:20.683 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:20.683 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:20.683 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:20.683 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:20.683 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:20.683 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:20.683 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:20.683 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:20.683 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:20.683 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.683 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:20.683 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:20.683 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.683 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.683 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.683 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7128392 kB' 'MemAvailable: 10530112 kB' 'Buffers: 2696 kB' 'Cached: 3602800 kB' 'SwapCached: 0 kB' 'Active: 856384 kB' 'Inactive: 2868556 kB' 'Active(anon): 129920 kB' 'Inactive(anon): 0 kB' 'Active(file): 726464 kB' 'Inactive(file): 2868556 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 288 kB' 'Writeback: 0 kB' 'AnonPages: 121040 kB' 'Mapped: 48924 kB' 'Shmem: 10476 kB' 'KReclaimable: 87280 kB' 'Slab: 167916 kB' 'SReclaimable: 87280 kB' 'SUnreclaim: 80636 kB' 'KernelStack: 6456 kB' 'PageTables: 4296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 347564 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 
169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:05:20.683 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.683 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.683 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.683 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.683 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.683 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.683 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.683 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.683 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.683 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.683 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.683 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.683 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.683 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.683 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.683 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.683 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.683 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.683 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.683 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.683 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.683 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.683 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.683 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.683 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.683 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.683 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.683 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.683 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.683 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.683 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.683 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.683 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:05:20.683 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.683 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.683 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.683 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.683 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.683 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.683 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.683 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.684 02:52:06 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.684 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.685 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.685 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.685 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.685 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.685 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.685 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.685 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.685 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.685 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.685 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.685 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.685 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.685 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.685 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.685 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.685 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.685 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.685 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.685 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.685 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:20.685 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:20.685 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:20.685 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # 
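# verify_nr_hugepages only samples AnonHugePages when transparent hugepages are
# not pinned to [never]; on this runner the policy is "always [madvise] never",
# so the anon counter is read and comes back 0 (the anon=0 just above). A
# sketch of that branch (illustrative names, using the sketch helper above):
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. always [madvise] never
anon=0
if [[ $thp != *"[never]"* ]]; then
    anon=$(get_meminfo_sketch AnonHugePages)              # 0 kB in this run
fi
echo "anon=$anon"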
get_meminfo HugePages_Surp 00:05:20.685 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:20.685 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:20.685 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:20.685 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:20.685 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.685 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:20.685 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:20.685 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.685 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.685 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.685 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7128644 kB' 'MemAvailable: 10530364 kB' 'Buffers: 2696 kB' 'Cached: 3602800 kB' 'SwapCached: 0 kB' 'Active: 855640 kB' 'Inactive: 2868556 kB' 'Active(anon): 129176 kB' 'Inactive(anon): 0 kB' 'Active(file): 726464 kB' 'Inactive(file): 2868556 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 120276 kB' 'Mapped: 48788 kB' 'Shmem: 10476 kB' 'KReclaimable: 87280 kB' 'Slab: 167916 kB' 'SReclaimable: 87280 kB' 'SUnreclaim: 80636 kB' 'KernelStack: 6448 kB' 'PageTables: 4100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 347564 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:05:20.685 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.685 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.685 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.685 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.685 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.685 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.685 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.685 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.685 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.685 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.685 
02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.949 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.949 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.949 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.949 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.949 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.949 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.949 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.949 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.949 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.949 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.949 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.949 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.949 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.949 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.949 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.949 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.949 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.949 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.949 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.949 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.949 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.949 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.949 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.949 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.949 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.949 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.949 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.949 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.949 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.949 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.949 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.949 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.949 02:52:06 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.949 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.949 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.949 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.949 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.949 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.949 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.949 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.949 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.949 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.950 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@99 -- # surp=0 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7128896 kB' 'MemAvailable: 10530616 kB' 'Buffers: 2696 kB' 'Cached: 3602800 kB' 'SwapCached: 0 kB' 'Active: 855656 kB' 'Inactive: 2868556 kB' 'Active(anon): 129192 kB' 'Inactive(anon): 0 kB' 'Active(file): 726464 kB' 'Inactive(file): 2868556 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 120312 kB' 'Mapped: 48788 kB' 'Shmem: 10476 kB' 'KReclaimable: 87280 kB' 'Slab: 167908 kB' 'SReclaimable: 87280 kB' 'SUnreclaim: 80628 kB' 'KernelStack: 6464 kB' 'PageTables: 4148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 347564 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.951 02:52:06 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.951 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.952 
02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.952 02:52:06 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.952 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.953 02:52:06 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:20.953 nr_hugepages=512 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:20.953 resv_hugepages=0 00:05:20.953 surplus_hugepages=0 00:05:20.953 anon_hugepages=0 00:05:20.953 02:52:06 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.953 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7128896 kB' 'MemAvailable: 10530616 kB' 'Buffers: 2696 kB' 'Cached: 3602800 kB' 'SwapCached: 0 kB' 'Active: 855640 kB' 'Inactive: 2868556 kB' 'Active(anon): 129176 kB' 'Inactive(anon): 0 kB' 'Active(file): 726464 kB' 'Inactive(file): 2868556 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 120312 kB' 'Mapped: 48788 kB' 'Shmem: 10476 kB' 'KReclaimable: 87280 kB' 'Slab: 167908 kB' 'SReclaimable: 87280 kB' 'SUnreclaim: 80628 kB' 'KernelStack: 6464 kB' 'PageTables: 4148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 347564 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.954 02:52:06 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.954 02:52:06 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.954 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.955 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:20.956 02:52:06 
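
The long scan traced above is setup/common.sh's get_meminfo walking every field of the node's meminfo file until it reaches HugePages_Total (512 here); the escaped key in the trace (\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l) is only how xtrace prints the literal right-hand side of each [[ $var == ... ]] comparison, and every non-matching field produces one "continue" entry. A minimal sketch of that loop, written from the trace rather than taken from the SPDK source, and assuming only the standard /proc and sysfs meminfo layouts:

  #!/usr/bin/env bash
  # Hand-written sketch of the field-by-field meminfo scan seen in the trace.
  shopt -s extglob
  get_meminfo_sketch() {
      local get=$1 node=${2:-}        # field name, optional NUMA node id
      local mem_f=/proc/meminfo var val line
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local -a mem
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")    # per-node lines carry a "Node N " prefix
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          if [[ $var == "$get" ]]; then
              echo "$val"                 # e.g. 512 for HugePages_Total above
              return 0
          fi
      done
      return 1
  }
  # get_meminfo_sketch HugePages_Total 0   # prints 512 on the node traced above
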
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7130456 kB' 'MemUsed: 5111516 kB' 'SwapCached: 0 kB' 'Active: 855696 kB' 'Inactive: 2868556 kB' 'Active(anon): 129232 kB' 'Inactive(anon): 0 kB' 'Active(file): 726464 kB' 'Inactive(file): 2868556 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'FilePages: 3605496 kB' 'Mapped: 48788 kB' 'AnonPages: 120384 kB' 'Shmem: 10476 kB' 'KernelStack: 6480 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 87280 kB' 'Slab: 167908 kB' 'SReclaimable: 87280 kB' 'SUnreclaim: 80628 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.956 02:52:06 
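
Before the per-node comparison, get_nodes (setup/hugepages.sh@27-33) enumerates /sys/devices/system/node/node* and records one hugepage count per node; this VM has a single node, so no_nodes=1 and node0 carries all 512 pages. A rough equivalent is below; the awk extraction is an illustrative assumption, not the mechanism the script itself uses:

  #!/usr/bin/env bash
  # Rough equivalent of the per-node bookkeeping traced above.
  shopt -s extglob
  declare -A nodes_sys
  for node in /sys/devices/system/node/node+([0-9]); do
      id=${node##*node}                                    # "node0" -> "0"
      nodes_sys[$id]=$(awk '/HugePages_Total/ {print $NF}' "$node/meminfo")
  done
  no_nodes=${#nodes_sys[@]}
  echo "no_nodes=$no_nodes"                                # 1 in this run
  for id in "${!nodes_sys[@]}"; do
      echo "node$id: HugePages_Total=${nodes_sys[$id]}"    # node0: 512
  done
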
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.956 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.957 02:52:06 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.957 02:52:06 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.957 
02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:20.957 node0=512 expecting 512 00:05:20.957 ************************************ 00:05:20.957 END TEST per_node_1G_alloc 00:05:20.957 ************************************ 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:20.957 00:05:20.957 real 0m0.739s 00:05:20.957 user 0m0.344s 00:05:20.957 sys 0m0.408s 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:20.957 02:52:06 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:20.957 02:52:06 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:20.957 02:52:06 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:20.957 02:52:06 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:20.957 02:52:06 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:20.957 ************************************ 00:05:20.957 START TEST even_2G_alloc 00:05:20.957 ************************************ 00:05:20.957 02:52:06 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc 00:05:20.957 02:52:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:20.957 02:52:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:20.957 02:52:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:20.957 02:52:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:20.957 02:52:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:20.957 02:52:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:20.958 02:52:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:20.958 02:52:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:20.958 02:52:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:20.958 
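
With node0=512 matching the expected 512, per_node_1G_alloc passes and even_2G_alloc starts by turning its 2097152 argument into nr_hugepages=1024. Assuming the argument is a size in kB and the 2048 kB Hugepagesize reported later in this log, the arithmetic is:

  #!/usr/bin/env bash
  # The numbers behind nr_hugepages=1024 above; units are an assumption
  # consistent with the Hugepagesize and Hugetlb values in this log.
  size_kb=2097152                                   # 2 GiB requested by even_2G_alloc
  hugepagesize_kb=2048                              # Hugepagesize on this VM
  nr_hugepages=$(( size_kb / hugepagesize_kb ))
  echo "nr_hugepages=$nr_hugepages"                 # -> 1024
  no_nodes=1                                        # single NUMA node here
  echo "per node: $(( nr_hugepages / no_nodes ))"   # -> 1024, all on node0
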
02:52:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:20.958 02:52:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:20.958 02:52:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:20.958 02:52:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:20.958 02:52:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:20.958 02:52:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:20.958 02:52:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:05:20.958 02:52:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:20.958 02:52:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:20.958 02:52:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:20.958 02:52:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:05:20.958 02:52:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:05:20.958 02:52:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:05:20.958 02:52:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:20.958 02:52:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:21.529 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:21.529 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:21.529 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:21.529 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:21.529 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:21.529 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:05:21.529 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:21.529 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:21.529 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:21.529 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:21.529 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:21.529 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:21.529 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:21.529 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:21.529 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:21.529 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:21.529 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:21.529 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:21.529 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.529 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:21.529 02:52:07 
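
After setup.sh leaves the mounted vda alone and keeps the NVMe test controllers on uio_pci_generic, verify_nr_hugepages first checks whether transparent huge pages are disabled; the [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test above is true, so AnonHugePages is sampled as well. A small sketch of that guard, assuming the usual sysfs location:

  #!/usr/bin/env bash
  # Sketch of the THP guard traced above; if "[never]" is not the active
  # policy, the kernel may still hand out transparent huge pages, so the
  # test records AnonHugePages before comparing hugepage counters.
  thp=/sys/kernel/mm/transparent_hugepage/enabled
  anon=0
  if [[ -r $thp && $(<"$thp") != *"[never]"* ]]; then
      # "always [madvise] never" in this run, so this branch is taken
      anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
  fi
  echo "AnonHugePages: $anon kB"    # 0 kB in the snapshot that follows
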
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:21.529 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.529 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.529 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.529 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.529 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6081040 kB' 'MemAvailable: 9482760 kB' 'Buffers: 2696 kB' 'Cached: 3602800 kB' 'SwapCached: 0 kB' 'Active: 856100 kB' 'Inactive: 2868556 kB' 'Active(anon): 129636 kB' 'Inactive(anon): 0 kB' 'Active(file): 726464 kB' 'Inactive(file): 2868556 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 120812 kB' 'Mapped: 49176 kB' 'Shmem: 10476 kB' 'KReclaimable: 87280 kB' 'Slab: 167868 kB' 'SReclaimable: 87280 kB' 'SUnreclaim: 80588 kB' 'KernelStack: 6568 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 347564 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:05:21.529 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.529 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.529 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.529 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.529 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.529 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.529 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.529 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.529 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.529 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.529 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.529 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.529 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.529 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.529 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.529 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.529 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.529 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.529 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.529 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.529 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.529 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.529 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.529 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.529 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.529 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.529 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.529 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.530 02:52:07 
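
The system-wide snapshot printed a few lines up shows the even allocation took effect: HugePages_Total: 1024 at Hugepagesize: 2048 kB, with Hugetlb: 2097152 kB. A quick consistency check over those three values (copied from this run, not read live):

  #!/usr/bin/env bash
  # Consistency check over the /proc/meminfo snapshot above.
  hugepages_total=1024     # HugePages_Total
  hugepagesize_kb=2048     # Hugepagesize
  hugetlb_kb=2097152       # Hugetlb
  if (( hugepages_total * hugepagesize_kb == hugetlb_kb )); then
      echo "OK: 1024 pages x 2048 kB = 2097152 kB (2 GiB) reserved"
  fi
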
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.530 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6081040 kB' 'MemAvailable: 9482760 kB' 'Buffers: 2696 kB' 'Cached: 3602800 kB' 'SwapCached: 0 kB' 'Active: 855556 kB' 'Inactive: 2868556 kB' 'Active(anon): 129092 kB' 'Inactive(anon): 0 kB' 'Active(file): 726464 kB' 'Inactive(file): 2868556 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 120152 kB' 'Mapped: 48924 kB' 'Shmem: 10476 kB' 'KReclaimable: 87280 kB' 'Slab: 167900 kB' 'SReclaimable: 87280 kB' 'SUnreclaim: 80620 kB' 'KernelStack: 6496 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 347564 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.531 02:52:07 
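
The AnonHugePages pass returns 0 (anon=0 above), and verify_nr_hugepages then re-reads /proc/meminfo for HugePages_Surp; the snapshot just printed already shows HugePages_Surp: 0, i.e. no overcommitted surplus pages. A one-line equivalent of that lookup, with awk standing in for the script's own field-by-field scan:

  #!/usr/bin/env bash
  # Illustrative substitute for the surplus-page lookup traced above.
  surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
  echo "HugePages_Surp=$surp"    # 0 in the snapshot above: no surplus pages
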
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.531 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.532 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:21.533 
02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6081040 kB' 'MemAvailable: 9482760 kB' 'Buffers: 2696 kB' 'Cached: 3602800 kB' 'SwapCached: 0 kB' 'Active: 855728 kB' 'Inactive: 2868556 kB' 'Active(anon): 129264 kB' 'Inactive(anon): 0 kB' 'Active(file): 726464 kB' 'Inactive(file): 2868556 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 120356 kB' 'Mapped: 48796 kB' 'Shmem: 10476 kB' 'KReclaimable: 87280 kB' 'Slab: 167900 kB' 'SReclaimable: 87280 kB' 'SUnreclaim: 80620 kB' 'KernelStack: 6480 kB' 'PageTables: 4200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 347564 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
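For reference, the meminfo snapshot printed before each lookup is consistent with the test's name: HugePages_Total: 1024 at Hugepagesize: 2048 kB gives 1024 * 2048 kB = 2097152 kB (2 GiB), which matches the Hugetlb: 2097152 kB line, i.e. exactly the even 2G allocation that even_2G_alloc is verifying.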
# read -r var val _ 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.533 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.795 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.795 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.795 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.795 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.795 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.795 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.795 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.795 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.795 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.795 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.795 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.795 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.795 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.795 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.795 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.795 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.795 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.795 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.795 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.795 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.795 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.795 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.795 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.795 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.795 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.795 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.796 02:52:07 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.796 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:21.797 nr_hugepages=1024 00:05:21.797 resv_hugepages=0 00:05:21.797 surplus_hugepages=0 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:21.797 anon_hugepages=0 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # 
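The bookkeeping the trace performs next (hugepages.sh@102 through @109) can be read as the following stand-alone check; meminfo_val is a placeholder helper for illustration, not the script's own function:

meminfo_val() { awk -v k="$1:" '$1 == k {print $2; exit}' /proc/meminfo; }

nr_hugepages=1024                        # what the test configured
anon=$(meminfo_val AnonHugePages)        # transparent huge pages in use -> 0 above
surp=$(meminfo_val HugePages_Surp)       # surplus pages                 -> 0 above
resv=$(meminfo_val HugePages_Rsvd)       # reserved, not yet faulted     -> 0 above
total=$(meminfo_val HugePages_Total)     # pages the kernel holds        -> 1024 above

echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"

# Same arithmetic the trace shows at hugepages.sh@107 and @109: every configured
# page must be accounted for, with no surplus or reserved pages left over.
(( total == nr_hugepages + surp + resv )) || { echo 'hugepage accounting mismatch' >&2; exit 1; }
(( total == nr_hugepages ))               || { echo 'unexpected surplus/reserved pages' >&2; exit 1; }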
local get=HugePages_Total 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6081040 kB' 'MemAvailable: 9482760 kB' 'Buffers: 2696 kB' 'Cached: 3602800 kB' 'SwapCached: 0 kB' 'Active: 856064 kB' 'Inactive: 2868556 kB' 'Active(anon): 129600 kB' 'Inactive(anon): 0 kB' 'Active(file): 726464 kB' 'Inactive(file): 2868556 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 120844 kB' 'Mapped: 48788 kB' 'Shmem: 10476 kB' 'KReclaimable: 87280 kB' 'Slab: 167900 kB' 'SReclaimable: 87280 kB' 'SUnreclaim: 80620 kB' 'KernelStack: 6528 kB' 'PageTables: 4352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.797 02:52:07 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.797 02:52:07 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.797 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.798 02:52:07 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.798 
02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:21.798 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.799 02:52:07 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6082608 kB' 'MemUsed: 6159364 kB' 'SwapCached: 0 kB' 'Active: 855600 kB' 'Inactive: 2868556 kB' 'Active(anon): 129136 kB' 'Inactive(anon): 0 kB' 'Active(file): 726464 kB' 'Inactive(file): 2868556 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'FilePages: 3605496 kB' 'Mapped: 48788 kB' 'AnonPages: 120316 kB' 'Shmem: 10476 kB' 'KernelStack: 6464 kB' 'PageTables: 4148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 87280 kB' 'Slab: 167892 kB' 'SReclaimable: 87280 kB' 'SUnreclaim: 80612 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.799 02:52:07 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.799 02:52:07 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.799 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.800 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.800 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.800 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.800 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.800 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.800 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.800 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.800 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.800 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.800 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.800 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.800 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.800 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.800 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.800 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.800 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.800 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.800 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.800 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.800 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:21.800 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.800 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.800 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.800 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.800 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.800 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.800 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.800 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.800 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.800 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.800 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.800 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.800 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.800 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.800 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.800 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.800 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.800 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:21.800 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:21.800 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:21.800 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:21.800 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:21.800 node0=1024 expecting 1024 00:05:21.800 ************************************ 00:05:21.800 END TEST even_2G_alloc 00:05:21.800 ************************************ 00:05:21.800 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:21.800 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:21.800 02:52:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:21.800 00:05:21.800 real 0m0.751s 00:05:21.800 user 0m0.351s 00:05:21.800 sys 0m0.418s 00:05:21.800 02:52:07 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:21.800 02:52:07 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:21.800 02:52:07 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:21.800 02:52:07 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:21.800 02:52:07 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:21.800 02:52:07 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 
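(Note on the even_2G_alloc result above: the (( 1024 == nr_hugepages + surp + resv )) check and the "node0=1024 expecting 1024" summary are the heart of the verification: the system-wide HugePages_Total must equal the requested count plus kernel-managed surplus and reserved pages, and each online NUMA node must hold its expected share. The sketch below illustrates that check; the field reads use awk for brevity, and exactly how hugepages.sh folds reserved/surplus pages into the per-node expectation is an assumption, not the literal script logic.

#!/usr/bin/env bash
shopt -s extglob
# Read one meminfo field, optionally from a node's meminfo file.
meminfo_field() {   # meminfo_field FIELD [NODE]
    local file=/proc/meminfo
    [[ -n ${2:-} ]] && file=/sys/devices/system/node/node$2/meminfo
    awk -v f="$1:" '{ for (i = 1; i <= NF; i++) if ($i == f) { print $(i + 1); exit } }' "$file"
}

verify_even_alloc_sketch() {
    local nr_hugepages=$1
    local total surp resv
    total=$(meminfo_field HugePages_Total)
    surp=$(meminfo_field HugePages_Surp)
    resv=$(meminfo_field HugePages_Rsvd)

    # System-wide: the kernel must report exactly the requested count plus
    # any surplus/reserved pages (all zero in this run).
    (( total == nr_hugepages + surp + resv )) || return 1

    # Per node: each online node should hold its even share, adjusted by its
    # own surplus; with a single node (as on this VM) the share is the full 1024.
    local -a nodes=(/sys/devices/system/node/node+([0-9]))
    local share=$(( nr_hugepages / ${#nodes[@]} ))
    local node id got
    for node in "${nodes[@]}"; do
        id=${node##*node}
        got=$(meminfo_field HugePages_Total "$id")
        echo "node$id=$got expecting $share"
        (( got == share + $(meminfo_field HugePages_Surp "$id") )) || return 1
    done
}

verify_even_alloc_sketch 1024

End of note.)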
00:05:21.800 ************************************ 00:05:21.800 START TEST odd_alloc 00:05:21.800 ************************************ 00:05:21.800 02:52:07 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc 00:05:21.800 02:52:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:21.800 02:52:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:05:21.800 02:52:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:21.800 02:52:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:21.800 02:52:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:21.800 02:52:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:21.800 02:52:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:21.800 02:52:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:21.800 02:52:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:21.800 02:52:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:21.800 02:52:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:21.800 02:52:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:21.800 02:52:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:21.800 02:52:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:21.800 02:52:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:21.800 02:52:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:05:21.800 02:52:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:21.800 02:52:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:21.800 02:52:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:21.800 02:52:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:21.800 02:52:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:21.800 02:52:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:05:21.800 02:52:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:21.800 02:52:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:22.059 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:22.320 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:22.320 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:22.320 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:22.320 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # 
local surp 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6081680 kB' 'MemAvailable: 9483404 kB' 'Buffers: 2696 kB' 'Cached: 3602804 kB' 'SwapCached: 0 kB' 'Active: 855876 kB' 'Inactive: 2868560 kB' 'Active(anon): 129412 kB' 'Inactive(anon): 0 kB' 'Active(file): 726464 kB' 'Inactive(file): 2868560 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 120472 kB' 'Mapped: 48932 kB' 'Shmem: 10476 kB' 'KReclaimable: 87280 kB' 'Slab: 167896 kB' 'SReclaimable: 87280 kB' 'SUnreclaim: 80616 kB' 'KernelStack: 6436 kB' 'PageTables: 4132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 347564 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
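(Note on the odd_alloc setup above: get_test_nr_hugepages is asked for 2098176 kB (HUGEMEM=2049 MB) and ends up with nr_hugepages=1025, i.e. the requested size divided by the 2048 kB default hugepage size and rounded up to a whole, odd, number of pages. The snippet below sketches that conversion; the ceiling rounding is an assumption that matches the numbers in the trace, not a quote of setup/hugepages.sh.

#!/usr/bin/env bash
# Convert a HUGEMEM request into a hugepage count, as in the odd_alloc test.
hugemem_mb=2049                               # HUGEMEM=2049
size_kb=$(( hugemem_mb * 1024 ))              # 2098176 kB requested
default_hugepage_kb=2048                      # Hugepagesize from /proc/meminfo

# Ceiling division: round any partial page up so the full request fits.
nr_hugepages=$(( (size_kb + default_hugepage_kb - 1) / default_hugepage_kb ))

echo "size=${size_kb}kB -> nr_hugepages=${nr_hugepages}"   # prints 1025

End of note.)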
00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.320 
02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.320 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.321 02:52:08 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
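(Note on the AnonHugePages scan above: before trusting the counters, verify_nr_hugepages first looks at the transparent hugepage setting; the [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test means THP is not forced off, so the script also reads AnonHugePages (0 kB in this run) to account for anonymous THP that could skew the count. A hedged sketch of that guard follows; the sysfs path is the standard kernel location, and treating any non-"[never]" setting as "THP may be in use" mirrors the bracketed-pattern test in the log.

#!/usr/bin/env bash
# Guard against anonymous transparent hugepages skewing the hugepage count.
thp_file=/sys/kernel/mm/transparent_hugepage/enabled
anon=0

if [[ -r $thp_file && $(<"$thp_file") != *"[never]"* ]]; then
    # THP is "always" or "madvise" (the active choice is the bracketed word),
    # so anonymous huge pages may exist; read how many kB they occupy.
    anon=$(awk '/^AnonHugePages:/ { print $2 }' /proc/meminfo)
fi

echo "AnonHugePages accounted for: ${anon} kB"   # 0 kB in this run

End of note.)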
00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6081680 kB' 'MemAvailable: 9483404 kB' 'Buffers: 2696 kB' 'Cached: 3602804 kB' 'SwapCached: 0 kB' 'Active: 855588 kB' 'Inactive: 2868560 kB' 'Active(anon): 129124 kB' 'Inactive(anon): 0 kB' 'Active(file): 726464 kB' 'Inactive(file): 2868560 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 120224 kB' 'Mapped: 48788 kB' 'Shmem: 10476 kB' 'KReclaimable: 87280 kB' 'Slab: 167916 kB' 'SReclaimable: 87280 kB' 'SUnreclaim: 80636 kB' 'KernelStack: 6448 kB' 'PageTables: 4100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 347564 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB'
00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:22.321 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
...
00:05:22.585 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:22.585 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:05:22.585 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:22.585 02:52:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
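The xtrace block above is get_meminfo() from setup/common.sh walking every /proc/meminfo field until it reaches HugePages_Surp and echoing its value (0 on this run). Below is a minimal stand-alone sketch of that scan; it is illustrative only, not the SPDK helper itself (the traced function uses mapfile plus a "${mem[@]#Node +([0-9]) }" expansion rather than sed):

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Prefer the per-node file when a node id is given and it exists,
    # mirroring the [[ -e /sys/devices/system/node/node$node/meminfo ]] check.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    # Per-node meminfo prefixes each line with "Node N "; strip it so keys
    # look like the plain /proc/meminfo form, then scan for the wanted key.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(sed -E 's/^Node [0-9]+ +//' "$mem_f")
    return 1
}

# Example: get_meminfo_sketch HugePages_Surp      -> 0
#          get_meminfo_sketch HugePages_Total 0   -> 1025 on this one-node VM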
00:05:22.585 02:52:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:22.585 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:22.585 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:05:22.585 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:05:22.585 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:22.585 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:22.585 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:22.585 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:22.585 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:22.585 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:22.585 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:22.585 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6081680 kB' 'MemAvailable: 9483404 kB' 'Buffers: 2696 kB' 'Cached: 3602804 kB' 'SwapCached: 0 kB' 'Active: 855596 kB' 'Inactive: 2868560 kB' 'Active(anon): 129132 kB' 'Inactive(anon): 0 kB' 'Active(file): 726464 kB' 'Inactive(file): 2868560 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 120228 kB' 'Mapped: 48788 kB' 'Shmem: 10476 kB' 'KReclaimable: 87280 kB' 'Slab: 167900 kB' 'SReclaimable: 87280 kB' 'SUnreclaim: 80620 kB' 'KernelStack: 6448 kB' 'PageTables: 4100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 347564 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB'
00:05:22.585 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:22.585 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:22.585 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:05:22.585 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:22.585 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
...
00:05:22.586 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:22.586 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:05:22.586 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:22.586 02:52:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
nr_hugepages=1025
00:05:22.586 02:52:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
resv_hugepages=0
00:05:22.586 02:52:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
surplus_hugepages=0
00:05:22.586 02:52:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
anon_hugepages=0
00:05:22.586 02:52:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:22.586 02:52:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:05:22.586 02:52:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
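At this point the test has read back nr_hugepages=1025 together with zero reserved, surplus and anonymous huge pages, and hugepages.sh checks that the kernel's HugePages_Total accounts exactly for the odd-sized request. A compact sketch of that accounting check; the helper name verify_hugepage_accounting and the reuse of get_meminfo_sketch from above are illustrative assumptions, not the script's own functions:

verify_hugepage_accounting() {
    local requested=$1
    local total surp resv
    total=$(get_meminfo_sketch HugePages_Total)
    surp=$(get_meminfo_sketch HugePages_Surp)
    resv=$(get_meminfo_sketch HugePages_Rsvd)
    echo "nr_hugepages=$requested resv_hugepages=$resv surplus_hugepages=$surp"
    # The pool the kernel reports must account for the request exactly,
    # e.g. 1025 == 1025 + 0 + 0 on this run.
    (( total == requested + surp + resv ))
}

# verify_hugepage_accounting 1025 && echo "odd_alloc accounting OK"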
00:05:22.586 02:52:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:22.586 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:22.586 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:05:22.586 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:05:22.586 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:22.586 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:22.586 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:22.586 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:22.586 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:22.586 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:22.586 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:22.586 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:22.586 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6081680 kB' 'MemAvailable: 9483404 kB' 'Buffers: 2696 kB' 'Cached: 3602804 kB' 'SwapCached: 0 kB' 'Active: 855616 kB' 'Inactive: 2868560 kB' 'Active(anon): 129152 kB' 'Inactive(anon): 0 kB' 'Active(file): 726464 kB' 'Inactive(file): 2868560 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 120268 kB' 'Mapped: 48788 kB' 'Shmem: 10476 kB' 'KReclaimable: 87280 kB' 'Slab: 167896 kB' 'SReclaimable: 87280 kB' 'SUnreclaim: 80616 kB' 'KernelStack: 6464 kB' 'PageTables: 4148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 347564 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB'
00:05:22.586 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:22.586 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:05:22.586 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:22.586 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
...
00:05:22.587 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:22.587 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:05:22.587 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:22.587 02:52:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:05:22.587 02:52:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:22.587 02:52:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:05:22.587 02:52:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:22.587 02:52:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025
00:05:22.587 02:52:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:22.587 02:52:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
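get_nodes enumerates the NUMA nodes under /sys/devices/system/node and records how many huge pages each one is expected to hold (all 1025 land on node0 of this single-node VM); the trace that follows then re-reads each node's own meminfo file to compare the per-node counters. A sketch of that per-node pass, again reusing the illustrative get_meminfo_sketch helper rather than the script's own functions:

check_nodes_sketch() {
    local expected=$1 node id surp free
    # The trace uses the extglob pattern node+([0-9]); a plain glob is close
    # enough for a sketch and needs no extra shell options.
    for node in /sys/devices/system/node/node[0-9]*; do
        id=${node##*node}
        free=$(get_meminfo_sketch HugePages_Free "$id")
        surp=$(get_meminfo_sketch HugePages_Surp "$id")
        echo "node$id: HugePages_Free=$free HugePages_Surp=$surp (expected $expected)"
    done
}

# check_nodes_sketch 1025   -> node0: HugePages_Free=1025 HugePages_Surp=0 (expected 1025)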
nodes_test[node] += resv )) 00:05:22.587 02:52:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:22.587 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:22.587 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:05:22.587 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:22.587 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:22.587 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:22.587 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:22.587 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6081680 kB' 'MemUsed: 6160292 kB' 'SwapCached: 0 kB' 'Active: 855736 kB' 'Inactive: 2868560 kB' 'Active(anon): 129272 kB' 'Inactive(anon): 0 kB' 'Active(file): 726464 kB' 'Inactive(file): 2868560 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'FilePages: 3605500 kB' 'Mapped: 48788 kB' 'AnonPages: 120376 kB' 'Shmem: 10476 kB' 'KernelStack: 6432 kB' 'PageTables: 4052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 87280 kB' 'Slab: 167884 kB' 'SReclaimable: 87280 kB' 'SUnreclaim: 80604 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
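The loop being traced here is the per-node meminfo lookup from setup/common.sh: when a node number is passed, the helper switches from /proc/meminfo to /sys/devices/system/node/node0/meminfo, strips the leading "Node 0 " prefix from each line, and then walks the fields with IFS=': ' until the requested key (HugePages_Surp in this case) matches, echoing its value. A minimal, self-contained sketch of that pattern, with illustrative function and variable names rather than the exact script:

    #!/usr/bin/env bash
    # Sketch of the meminfo scan visible in the trace above (illustrative names).
    shopt -s extglob                       # needed for the "Node +([0-9]) " prefix strip

    lookup_meminfo() {                     # lookup_meminfo <field> [node]
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        local var val line mem

        # Per-node queries read the node-local file instead of /proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Per-node lines look like "Node 0 HugePages_Surp: 0"; drop the prefix
        mem=("${mem[@]#Node +([0-9]) }")

        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }

    lookup_meminfo HugePages_Surp 0        # prints 0 on the node traced above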
00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.588 02:52:08 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:22.588 node0=1025 expecting 1025 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:05:22.588 00:05:22.588 real 0m0.743s 00:05:22.588 user 0m0.346s 00:05:22.588 sys 0m0.415s 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:22.588 02:52:08 setup.sh.hugepages.odd_alloc -- 
common/autotest_common.sh@10 -- # set +x 00:05:22.588 ************************************ 00:05:22.588 END TEST odd_alloc 00:05:22.588 ************************************ 00:05:22.588 02:52:08 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:22.588 02:52:08 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:22.588 02:52:08 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:22.588 02:52:08 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:22.588 ************************************ 00:05:22.588 START TEST custom_alloc 00:05:22.588 ************************************ 00:05:22.588 02:52:08 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc 00:05:22.588 02:52:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:05:22.588 02:52:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:05:22.588 02:52:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:22.588 02:52:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:22.588 02:52:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:22.588 02:52:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:22.588 02:52:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:22.589 02:52:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:22.589 02:52:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:22.589 02:52:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:22.589 02:52:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:22.589 02:52:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:22.589 02:52:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:22.589 02:52:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:22.589 02:52:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:22.589 02:52:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:22.589 02:52:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:22.589 02:52:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:22.589 02:52:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:22.589 02:52:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:22.589 02:52:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:22.589 02:52:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:22.589 02:52:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:22.589 02:52:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:22.589 02:52:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:22.589 02:52:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:05:22.589 02:52:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in 
"${!nodes_hp[@]}" 00:05:22.589 02:52:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:22.589 02:52:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:22.589 02:52:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:22.589 02:52:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:22.589 02:52:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:22.589 02:52:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:22.589 02:52:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:22.589 02:52:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:22.589 02:52:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:22.589 02:52:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:22.589 02:52:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:22.589 02:52:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:22.589 02:52:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:22.589 02:52:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:22.589 02:52:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:05:22.589 02:52:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:05:22.589 02:52:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:22.589 02:52:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:23.160 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:23.160 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:23.160 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:23.160 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:23.160 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:23.160 02:52:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:05:23.160 02:52:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:23.160 02:52:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:05:23.160 02:52:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:23.160 02:52:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:23.160 02:52:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:23.160 02:52:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:23.160 02:52:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:23.160 02:52:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:23.161 02:52:09 
setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7132780 kB' 'MemAvailable: 10534504 kB' 'Buffers: 2696 kB' 'Cached: 3602804 kB' 'SwapCached: 0 kB' 'Active: 852768 kB' 'Inactive: 2868560 kB' 'Active(anon): 126304 kB' 'Inactive(anon): 0 kB' 'Active(file): 726464 kB' 'Inactive(file): 2868560 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 117144 kB' 'Mapped: 48148 kB' 'Shmem: 10476 kB' 'KReclaimable: 87276 kB' 'Slab: 167752 kB' 'SReclaimable: 87276 kB' 'SUnreclaim: 80476 kB' 'KernelStack: 6352 kB' 'PageTables: 3688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 335044 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
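The snapshot just printed ties the custom_alloc numbers together: the test asked for 1048576 kB of hugepage memory, and with the 2048 kB Hugepagesize shown above that works out to the 512 pages reported in HugePages_Total (512 * 2048 kB = 1048576 kB, matching the Hugetlb line). A small sketch of that size-to-page-count conversion, using the same kB units as the trace and illustrative names:

    #!/usr/bin/env bash
    # Convert a requested hugepage memory size (kB) into a page count,
    # mirroring the arithmetic behind nr_hugepages=512 (illustrative names).
    pages_for_size() {                     # pages_for_size <size_kB> [hugepage_kB]
        local size=$1
        local hugepage=${2:-$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)}
        if (( size < hugepage )); then
            echo "requested size is smaller than one hugepage" >&2
            return 1
        fi
        echo $(( size / hugepage ))
    }

    pages_for_size 1048576 2048            # -> 512, as in HugePages_Total above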
00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.161 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7132528 kB' 'MemAvailable: 10534252 kB' 'Buffers: 2696 kB' 'Cached: 3602804 kB' 'SwapCached: 0 kB' 'Active: 852372 kB' 'Inactive: 2868560 kB' 'Active(anon): 125908 kB' 'Inactive(anon): 0 kB' 'Active(file): 726464 kB' 'Inactive(file): 2868560 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 116968 kB' 'Mapped: 48048 kB' 'Shmem: 10476 kB' 'KReclaimable: 87276 kB' 'Slab: 167752 kB' 'SReclaimable: 87276 kB' 'SUnreclaim: 80476 kB' 'KernelStack: 6352 kB' 'PageTables: 3672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 335044 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.162 02:52:09 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.162 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.163 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
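The repeated "continue" entries above are setup/common.sh's get_meminfo loop skipping every /proc/meminfo key that is not the one requested (here HugePages_Surp). A minimal sketch of that scan, assuming a simplified helper rather than the exact SPDK function:

#!/usr/bin/env bash
# Sketch only: approximates the traced loop; get_meminfo_sketch is not the real helper.
get_meminfo_sketch() {
    local get=$1        # key to extract, e.g. HugePages_Surp
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # each mismatch shows up as one 'continue' in the trace
        echo "$val"                        # value only; the unit lands in the discarded field
        return 0
    done < /proc/meminfo
    return 1
}

# e.g. surp=$(get_meminfo_sketch HugePages_Surp)   # -> 0 in this run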
00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.164 02:52:09 
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7132528 kB' 'MemAvailable: 10534252 kB' 'Buffers: 2696 kB' 'Cached: 3602804 kB' 'SwapCached: 0 kB' 'Active: 852236 kB' 'Inactive: 2868560 kB' 'Active(anon): 125772 kB' 'Inactive(anon): 0 kB' 'Active(file): 726464 kB' 'Inactive(file): 2868560 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 117136 kB' 'Mapped: 48048 kB' 'Shmem: 10476 kB' 'KReclaimable: 87276 kB' 'Slab: 167752 kB' 'SReclaimable: 87276 kB' 'SUnreclaim: 80476 kB' 'KernelStack: 6384 kB' 'PageTables: 3776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 334676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.164 02:52:09 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# continue 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.164 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.165 02:52:09 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.165 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:23.166 nr_hugepages=512 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:23.166 resv_hugepages=0 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:23.166 surplus_hugepages=0 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:23.166 anon_hugepages=0 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var 
val 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7132528 kB' 'MemAvailable: 10534252 kB' 'Buffers: 2696 kB' 'Cached: 3602804 kB' 'SwapCached: 0 kB' 'Active: 852144 kB' 'Inactive: 2868560 kB' 'Active(anon): 125680 kB' 'Inactive(anon): 0 kB' 'Active(file): 726464 kB' 'Inactive(file): 2868560 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 116764 kB' 'Mapped: 48048 kB' 'Shmem: 10476 kB' 'KReclaimable: 87276 kB' 'Slab: 167728 kB' 'SReclaimable: 87276 kB' 'SUnreclaim: 80452 kB' 'KernelStack: 6288 kB' 'PageTables: 3472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 335044 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.166 02:52:09 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.166 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.167 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.167 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.167 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.167 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.167 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.167 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.167 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.167 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.167 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.167 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.167 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
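Once surp, resv and the requested count are known, hugepages.sh (the @99-@110 entries in the trace above) checks that the kernel-reported pool adds up. A rough sketch of that arithmetic, reusing the illustrative helper from the previous sketch; the variable names are assumptions, not the exact script:

# Sketch only: mirrors the checks logged at setup/hugepages.sh@107-@110 above.
nr_hugepages=512                                  # size requested by the test
surp=$(get_meminfo_sketch HugePages_Surp)         # 0 in this run
resv=$(get_meminfo_sketch HugePages_Rsvd)         # 0 in this run
total=$(get_meminfo_sketch HugePages_Total)       # 512 in this run

# The pool is consistent when the reported total covers the requested pages
# plus anything surplus or reserved.
(( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2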
00:05:23.167 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.167 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.167 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.167 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.167 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.167 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.167 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.167 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.167 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.167 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.167 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.167 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.167 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.167 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.167 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.167 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.167 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.167 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.167 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.167 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.167 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.167 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.167 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.167 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.427 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
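After the system-wide total checks out, the trace below repeats the same scan per NUMA node, reading /sys/devices/system/node/node0/meminfo instead of /proc/meminfo (node files prefix every line with "Node 0 "). A sketch of that per-node variant, again with illustrative names rather than the real SPDK helpers:

# Sketch only: per-node version of the scan shown in the following trace.
get_node_meminfo_sketch() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(sed 's/^Node [0-9]* //' "$mem_f")   # drop the per-node prefix first
    return 1
}

# e.g. node0_surp=$(get_node_meminfo_sketch HugePages_Surp 0)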
00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7132528 kB' 'MemUsed: 5109444 kB' 'SwapCached: 0 kB' 'Active: 851964 kB' 'Inactive: 2868560 kB' 'Active(anon): 125500 kB' 'Inactive(anon): 0 kB' 'Active(file): 726464 kB' 'Inactive(file): 2868560 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'FilePages: 3605500 kB' 'Mapped: 48048 kB' 'AnonPages: 116868 kB' 'Shmem: 10476 
kB' 'KernelStack: 6388 kB' 'PageTables: 3584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 87276 kB' 'Slab: 167724 kB' 'SReclaimable: 87276 kB' 'SUnreclaim: 80448 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.428 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.429 02:52:09 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:23.429 node0=512 expecting 512 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:23.429 00:05:23.429 real 0m0.721s 00:05:23.429 user 0m0.340s 00:05:23.429 sys 0m0.432s 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:23.429 02:52:09 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:23.429 ************************************ 00:05:23.429 END TEST custom_alloc 00:05:23.429 ************************************ 00:05:23.429 02:52:09 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:23.429 02:52:09 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:23.429 02:52:09 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:23.429 02:52:09 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:23.429 ************************************ 00:05:23.429 START TEST no_shrink_alloc 00:05:23.429 ************************************ 00:05:23.429 02:52:09 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc 00:05:23.429 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:23.429 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:23.429 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:23.429 02:52:09 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@51 -- # shift 00:05:23.429 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:23.429 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:23.429 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:23.429 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:23.429 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:23.429 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:23.429 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:23.429 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:23.429 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:23.429 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:23.429 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:23.429 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:23.429 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:23.430 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:23.430 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:23.430 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:05:23.430 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:23.430 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:23.688 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:23.951 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:23.951 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:23.951 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:23.951 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:23.951 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:23.951 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:23.951 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@18 -- # local node= 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6086852 kB' 'MemAvailable: 9488576 kB' 'Buffers: 2696 kB' 'Cached: 3602804 kB' 'SwapCached: 0 kB' 'Active: 853248 kB' 'Inactive: 2868560 kB' 'Active(anon): 126784 kB' 'Inactive(anon): 0 kB' 'Active(file): 726464 kB' 'Inactive(file): 2868560 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 117884 kB' 'Mapped: 48208 kB' 'Shmem: 10476 kB' 'KReclaimable: 87276 kB' 'Slab: 167704 kB' 'SReclaimable: 87276 kB' 'SUnreclaim: 80428 kB' 'KernelStack: 6372 kB' 'PageTables: 3824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 335044 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.952 02:52:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.952 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.953 02:52:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.953 02:52:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6086852 kB' 'MemAvailable: 9488576 kB' 'Buffers: 2696 kB' 'Cached: 3602804 kB' 'SwapCached: 0 kB' 'Active: 852428 kB' 'Inactive: 2868560 kB' 'Active(anon): 125964 kB' 'Inactive(anon): 0 kB' 'Active(file): 726464 kB' 'Inactive(file): 2868560 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 
117072 kB' 'Mapped: 48152 kB' 'Shmem: 10476 kB' 'KReclaimable: 87276 kB' 'Slab: 167704 kB' 'SReclaimable: 87276 kB' 'SUnreclaim: 80428 kB' 'KernelStack: 6384 kB' 'PageTables: 3780 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 335044 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.953 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
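The repeated IFS=': ' / read -r / continue entries surrounding this point are setup/common.sh's get_meminfo walking every meminfo field until it reaches the one it was asked for (AnonHugePages above, HugePages_Surp here); each skipped field produces one 'continue' entry in the trace. A minimal sketch of that scan, paraphrased from the traced steps rather than taken verbatim from the script (and assuming bash with extglob enabled):

    # Sketch of the field scan traced above/below; paraphrased, not the
    # verbatim setup/common.sh implementation.
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=$2 var val _ mem
        local mem_f=/proc/meminfo
        # With a node argument, read that node's meminfo instead of the global file.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix lines with "Node <N> "
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # each skipped field is one 'continue' entry
            echo "$val"                        # e.g. 512 for HugePages_Total, 0 for HugePages_Surp
            return 0
        done
    }

The scan of the remaining fields continues in the trace below.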
00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.954 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.955 02:52:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6086600 kB' 'MemAvailable: 9488324 kB' 'Buffers: 2696 kB' 'Cached: 3602804 kB' 'SwapCached: 0 kB' 'Active: 852424 kB' 'Inactive: 2868560 kB' 'Active(anon): 125960 kB' 'Inactive(anon): 0 kB' 'Active(file): 726464 kB' 'Inactive(file): 2868560 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 117112 kB' 'Mapped: 48048 kB' 'Shmem: 10476 kB' 'KReclaimable: 87276 kB' 'Slab: 167704 kB' 'SReclaimable: 87276 kB' 'SUnreclaim: 80428 kB' 'KernelStack: 6384 kB' 'PageTables: 3776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 335044 kB' 
'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.955 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.956 02:52:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.956 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.957 02:52:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:23.957 nr_hugepages=1024 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:23.957 resv_hugepages=0 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:23.957 surplus_hugepages=0 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:23.957 anon_hugepages=0 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6086600 kB' 'MemAvailable: 9488324 kB' 'Buffers: 2696 kB' 'Cached: 3602804 kB' 'SwapCached: 0 kB' 'Active: 852432 kB' 'Inactive: 2868560 kB' 'Active(anon): 125968 kB' 'Inactive(anon): 0 kB' 'Active(file): 726464 kB' 'Inactive(file): 2868560 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 117112 kB' 'Mapped: 48048 kB' 'Shmem: 10476 kB' 'KReclaimable: 87276 kB' 'Slab: 167704 kB' 'SReclaimable: 87276 kB' 'SUnreclaim: 80428 kB' 'KernelStack: 6384 kB' 'PageTables: 3776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 335044 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 
54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.957 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.958 02:52:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.958 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:23.959 
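The trace above keeps repeating one lookup pattern from setup/common.sh: get_meminfo prints every "Key: value" line of /proc/meminfo (or a per-node meminfo file) via printf, scans them with IFS=': ' read, skips non-matching keys with continue, and echoes the value of the requested key; hugepages.sh then checks that the kernel-reported HugePages_Total (1024 here) equals the requested pool plus surplus and reserved pages. Below is a minimal standalone sketch of that pattern, reconstructed from the xtrace rather than copied from the repository — the argument handling and the printf/while plumbing are inferred, only the expansions and comparisons visible in the trace are taken as given.

    #!/usr/bin/env bash
    # Sketch of the meminfo lookup traced above (setup/common.sh, get_meminfo).
    # Reconstructed from the xtrace output; names and plumbing are illustrative.
    shopt -s extglob

    get_meminfo() {
        local get=$1 node=${2:-}     # key to look up, optional NUMA node id
        local var val _ mem
        local mem_f=/proc/meminfo

        # Per-node counters live in sysfs and carry a "Node N " prefix on each line.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem <"$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")

        # Scan each "Key: value [kB]" line; skip non-matching keys, echo the match.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    # Consistency check in the spirit of hugepages.sh: the kernel-reported total
    # (1024 pages in this run) must equal the requested pool plus surplus/reserved.
    nr_hugepages=1024
    surp=$(get_meminfo HugePages_Surp)
    resv=$(get_meminfo HugePages_Rsvd)
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) && echo "hugepage pool consistent"

Scanning the whole file in pure bash (no awk/grep) appears to be a deliberate choice to keep the helper dependency-free inside the autotest VMs; the cost is the long per-key trace seen in this log when the test runs under xtrace.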
02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6086600 kB' 'MemUsed: 6155372 kB' 'SwapCached: 0 kB' 'Active: 852348 kB' 'Inactive: 2868560 kB' 'Active(anon): 125884 kB' 'Inactive(anon): 0 kB' 'Active(file): 726464 kB' 'Inactive(file): 2868560 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'FilePages: 3605500 kB' 'Mapped: 48048 kB' 'AnonPages: 116988 kB' 'Shmem: 10476 kB' 'KernelStack: 6368 kB' 'PageTables: 3728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 87276 kB' 'Slab: 167704 kB' 'SReclaimable: 87276 kB' 'SUnreclaim: 80428 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.959 02:52:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.959 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.221 02:52:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.221 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.222 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.222 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.222 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.222 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.222 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.222 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.222 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.222 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.222 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.222 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.222 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.222 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.222 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.222 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.222 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.222 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.222 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.222 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.222 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.222 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.222 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.222 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.222 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.222 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.222 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.222 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.222 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
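
The trace above and below is setup/common.sh's get_meminfo helper walking a /proc/meminfo snapshot entry by entry looking for HugePages_Surp: every non-matching key hits "continue", and the matching key's value is echoed and the function returns. A minimal sketch of that lookup, simplified to a direct read of /proc/meminfo rather than the mapfile-based snapshot the real script takes (illustrative only, not the actual common.sh implementation):

get_meminfo_sketch() {
    # Illustrative lookup of a single /proc/meminfo key; the real helper also
    # supports per-node files by stripping the "Node N " prefix first, as the
    # mem=("${mem[@]#Node +([0-9]) }") line in this trace shows.
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # Same comparison the trace shows for every key, e.g. [[ Dirty == HugePages_Surp ]]
        [[ $var == "$get" ]] && echo "$val" && return 0
    done </proc/meminfo
    return 1
}

For example, get_meminfo_sketch HugePages_Surp prints 0 on this host, which is the value the trace hands back via "echo 0".
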
00:05:24.222 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.222 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.222 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.222 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.222 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.222 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.222 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:24.222 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:24.222 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:24.222 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:24.222 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:24.222 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:24.222 node0=1024 expecting 1024 00:05:24.222 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:24.222 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:24.222 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:24.222 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:24.222 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:05:24.222 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:24.222 02:52:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:24.481 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:24.481 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:24.481 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:24.481 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:24.746 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:24.746 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6084980 kB' 'MemAvailable: 9486704 kB' 'Buffers: 2696 kB' 'Cached: 3602804 kB' 'SwapCached: 0 kB' 'Active: 852444 kB' 'Inactive: 2868560 kB' 'Active(anon): 125980 kB' 'Inactive(anon): 0 kB' 'Active(file): 726464 kB' 'Inactive(file): 2868560 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 117116 kB' 'Mapped: 48184 kB' 'Shmem: 10476 kB' 'KReclaimable: 87276 kB' 'Slab: 167684 kB' 'SReclaimable: 87276 kB' 'SUnreclaim: 80408 kB' 'KernelStack: 6328 kB' 'PageTables: 3560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 335044 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.746 02:52:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.746 02:52:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.746 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
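
For reference while this scan works its way toward the AnonHugePages entry: the snapshot being scanned (printed by the printf '%s\n' call above) already reports HugePages_Total: 1024 and HugePages_Free: 1024 with Hugepagesize: 2048 kB, which is why the earlier INFO line said 512 hugepages were requested but 1024 are already allocated on node0. A quick cross-check of those numbers, values copied verbatim from the snapshot:

hp_total=1024       # HugePages_Total from the snapshot above
hp_size_kb=2048     # Hugepagesize in kB
echo "$(( hp_total * hp_size_kb )) kB"   # 2097152 kB, matching the snapshot's Hugetlb: field
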
00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.747 02:52:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6084728 kB' 'MemAvailable: 9486452 kB' 'Buffers: 2696 kB' 'Cached: 3602804 kB' 'SwapCached: 0 kB' 'Active: 852556 kB' 'Inactive: 2868560 kB' 'Active(anon): 126092 kB' 'Inactive(anon): 0 kB' 'Active(file): 726464 kB' 'Inactive(file): 2868560 kB' 'Unevictable: 1536 kB' 
'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 117224 kB' 'Mapped: 48184 kB' 'Shmem: 10476 kB' 'KReclaimable: 87276 kB' 'Slab: 167724 kB' 'SReclaimable: 87276 kB' 'SUnreclaim: 80448 kB' 'KernelStack: 6336 kB' 'PageTables: 3660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 335044 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.747 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.748 02:52:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.748 02:52:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.748 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.749 02:52:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
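
This pass re-reads the snapshot for HugePages_Surp (finishing just below, surp=0), and the lookup that follows it fetches HugePages_Rsvd; verify_nr_hugepages combines these with the free and total counts it collects per node. One plausible way to combine them, consistent with the "node0=1024 expecting 1024" line earlier in this log but not necessarily the exact expression used by setup/hugepages.sh:

# Values as they appear in this run's snapshots
free=1024   # HugePages_Free
resv=0      # HugePages_Rsvd
surp=0      # HugePages_Surp
total=1024  # HugePages_Total
# Effective pages available to the test: free minus reserved plus surplus
echo "node0=$(( free - resv + surp )) expecting $total"   # node0=1024 expecting 1024
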
00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6084728 kB' 'MemAvailable: 9486452 kB' 'Buffers: 2696 kB' 'Cached: 3602804 kB' 'SwapCached: 0 kB' 'Active: 852472 kB' 'Inactive: 2868560 kB' 'Active(anon): 126008 kB' 'Inactive(anon): 0 kB' 'Active(file): 726464 kB' 'Inactive(file): 2868560 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 117104 kB' 'Mapped: 48056 kB' 'Shmem: 10476 kB' 'KReclaimable: 
87276 kB' 'Slab: 167724 kB' 'SReclaimable: 87276 kB' 'SUnreclaim: 80448 kB' 'KernelStack: 6368 kB' 'PageTables: 3728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 335044 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.749 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.750 02:52:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
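The repetitive "[[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]" / "continue" entries here are the xtrace of get_meminfo in setup/common.sh walking a snapshot of /proc/meminfo one field at a time: mapfile -t mem captures the printf'd snapshot, IFS=': ' with read -r var val _ splits each "Key: value [kB]" entry, every non-matching key is skipped, and the value is echoed once the requested key is found. Below is a minimal sketch of that lookup pattern; the helper name get_meminfo_value and its exact structure are illustrative, not copied from the script, which additionally supports per-node meminfo files and a pre-captured snapshot.

#!/usr/bin/env bash
# Sketch of the /proc/meminfo lookup pattern visible in the trace above.
get_meminfo_value() {
    local get=$1            # e.g. HugePages_Rsvd
    local var val unit
    while IFS=': ' read -r var val unit; do
        [[ $var == "$get" ]] || continue   # skip every other meminfo field
        echo "$val"                        # numeric value; "kB", if present, lands in $unit
        return 0
    done < /proc/meminfo
    return 1
}

resv=$(get_meminfo_value HugePages_Rsvd)   # e.g. resv=0, as in this run

The same scan is replayed once per requested key (HugePages_Surp, HugePages_Rsvd, HugePages_Total), which is why the full field list appears several times in this part of the log.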
00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.750 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.751 02:52:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
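The snapshot itself already contains the numbers the test goes on to verify: HugePages_Total: 1024 with Hugepagesize: 2048 kB, which is exactly the Hugetlb figure also reported (1024 * 2048 kB = 2097152 kB, i.e. 2 GiB of the roughly 12 GiB MemTotal reserved as huge pages). A one-line check of that arithmetic with the values taken from this log:

# Values copied from the meminfo snapshot above
echo $(( 1024 * 2048 ))   # prints 2097152, matching the Hugetlb line (kB)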
00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:24.751 nr_hugepages=1024 00:05:24.751 resv_hugepages=0 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:24.751 surplus_hugepages=0 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:24.751 anon_hugepages=0 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6085780 kB' 'MemAvailable: 9487504 kB' 'Buffers: 2696 kB' 'Cached: 3602804 kB' 'SwapCached: 0 kB' 'Active: 852372 kB' 'Inactive: 2868560 kB' 'Active(anon): 125908 kB' 'Inactive(anon): 0 kB' 'Active(file): 726464 kB' 'Inactive(file): 2868560 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 
'AnonPages: 117064 kB' 'Mapped: 48056 kB' 'Shmem: 10476 kB' 'KReclaimable: 87276 kB' 'Slab: 167724 kB' 'SReclaimable: 87276 kB' 'SUnreclaim: 80448 kB' 'KernelStack: 6384 kB' 'PageTables: 3776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 335292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.751 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.752 02:52:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.752 02:52:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.752 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
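The entries that follow show the bookkeeping this test actually cares about: get_meminfo returns HugePages_Total (echo 1024), hugepages.sh checks (( 1024 == nr_hugepages + surp + resv )) (1024 == 1024 + 0 + 0 for this run), enumerates the NUMA nodes, and repeats the lookup per node, at which point get_meminfo swaps its source from /proc/meminfo to /sys/devices/system/node/node0/meminfo and strips the leading "Node 0 " prefix from each line. A rough sketch of that node-aware source selection and of the final check, using numbers from this log; the function and variable names are illustrative rather than the script's own.

#!/usr/bin/env bash
shopt -s extglob   # the "Node <n> " prefix strip in the trace relies on extglob

# Pick the meminfo source for an optional NUMA node (illustrative helper).
meminfo_source() {
    local node=$1   # empty => system-wide /proc/meminfo
    local mem_f=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    echo "$mem_f"
}

# Per-node files prefix every line with "Node <n> "; the trace strips it with:
#   mem=("${mem[@]#Node +([0-9]) }")

# Consistency check performed by the test, with the values from this run:
nr_hugepages=1024 surp=0 resv=0
(( 1024 == nr_hugepages + surp + resv )) && echo 'node0=1024 expecting 1024'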
00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6086072 kB' 'MemUsed: 6155900 kB' 'SwapCached: 0 kB' 'Active: 852296 kB' 'Inactive: 2868556 kB' 'Active(anon): 125832 kB' 'Inactive(anon): 0 kB' 'Active(file): 726464 kB' 'Inactive(file): 2868556 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'FilePages: 3605496 kB' 'Mapped: 48028 kB' 'AnonPages: 117008 kB' 'Shmem: 10476 kB' 'KernelStack: 6368 kB' 'PageTables: 3744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 87276 kB' 'Slab: 167724 kB' 'SReclaimable: 87276 kB' 'SUnreclaim: 80448 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.753 
02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.753 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.754 
02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.754 02:52:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:24.754 node0=1024 expecting 1024 00:05:24.754 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:24.755 02:52:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:24.755 00:05:24.755 real 0m1.412s 00:05:24.755 user 0m0.665s 00:05:24.755 sys 0m0.852s 00:05:24.755 02:52:10 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:24.755 02:52:10 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:24.755 ************************************ 00:05:24.755 END TEST no_shrink_alloc 00:05:24.755 ************************************ 00:05:24.755 02:52:10 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:05:24.755 02:52:10 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:24.755 02:52:10 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:24.755 02:52:10 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:24.755 02:52:10 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:24.755 02:52:10 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:24.755 02:52:10 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:24.755 02:52:10 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:24.755 02:52:10 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:24.755 00:05:24.755 real 0m6.268s 00:05:24.755 user 0m2.870s 00:05:24.755 sys 0m3.506s 00:05:24.755 02:52:10 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:24.755 02:52:10 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:24.755 ************************************ 00:05:24.755 
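The loop traced above is the field scan inside SPDK's setup/common.sh meminfo helper: it splits each meminfo line on ': ', continues past every field that is not the one requested (HugePages_Surp here), and echoes the value once the matching field is reached. A minimal standalone sketch of the same scan; get_meminfo_field is a hypothetical name used only for illustration:

get_meminfo_field() {
    local want=$1 src=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$want" ]] || continue   # the long run of "continue" lines above
        echo "$val"
        return 0
    done < "$src"
    return 1
}

get_meminfo_field HugePages_Surp   # prints 0 here, matching the "echo 0" in the trace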
END TEST hugepages 00:05:24.755 ************************************ 00:05:25.014 02:52:10 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:25.014 02:52:10 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:25.014 02:52:10 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:25.014 02:52:10 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:25.014 ************************************ 00:05:25.014 START TEST driver 00:05:25.014 ************************************ 00:05:25.014 02:52:10 setup.sh.driver -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:25.014 * Looking for test storage... 00:05:25.014 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:25.014 02:52:10 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:25.014 02:52:10 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:25.014 02:52:10 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:31.602 02:52:16 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:31.602 02:52:16 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:31.602 02:52:16 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:31.602 02:52:16 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:31.602 ************************************ 00:05:31.602 START TEST guess_driver 00:05:31.602 ************************************ 00:05:31.602 02:52:16 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:05:31.602 02:52:16 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:31.602 02:52:16 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:31.602 02:52:16 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:31.602 02:52:16 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:31.602 02:52:16 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:31.602 02:52:16 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:31.602 02:52:16 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:31.602 02:52:16 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:31.602 02:52:16 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:31.602 02:52:16 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:05:31.602 02:52:16 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:05:31.602 02:52:16 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:05:31.602 02:52:16 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:31.602 02:52:16 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:31.602 02:52:16 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:31.602 02:52:16 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:31.602 02:52:16 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:05:31.602 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == 
*\.\k\o* ]] 00:05:31.602 02:52:16 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:31.602 02:52:16 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:31.602 02:52:16 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:31.602 Looking for driver=uio_pci_generic 00:05:31.602 02:52:16 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:31.602 02:52:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:31.602 02:52:16 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:05:31.602 02:52:16 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:31.602 02:52:16 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:31.602 02:52:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:31.602 02:52:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:05:31.602 02:52:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:32.170 02:52:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:32.170 02:52:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:32.170 02:52:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:32.170 02:52:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:32.170 02:52:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:32.170 02:52:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:32.170 02:52:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:32.170 02:52:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:32.170 02:52:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:32.170 02:52:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:32.170 02:52:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:32.170 02:52:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:32.170 02:52:18 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:32.170 02:52:18 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:32.170 02:52:18 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:32.170 02:52:18 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:38.738 00:05:38.738 real 0m7.163s 00:05:38.738 user 0m0.803s 00:05:38.738 sys 0m1.456s 00:05:38.738 02:52:23 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:38.738 02:52:23 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:38.738 ************************************ 00:05:38.738 END TEST guess_driver 00:05:38.738 ************************************ 00:05:38.738 00:05:38.738 real 0m13.222s 00:05:38.738 user 0m1.144s 00:05:38.738 sys 0m2.267s 00:05:38.738 02:52:24 
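The guess_driver trace above shows pick_driver trying vfio first (it needs populated /sys/kernel/iommu_groups or the unsafe no-IOMMU module parameter set to Y; both checks fail on this VM) and then falling back to uio_pci_generic, which is accepted only because modprobe --show-depends resolves it to real .ko module files. A condensed sketch of that decision, assumed to mirror test/setup/driver.sh only loosely:

pick_driver() {
    shopt -s nullglob                      # so an empty iommu_groups dir yields zero entries
    local groups=(/sys/kernel/iommu_groups/*)
    if (( ${#groups[@]} > 0 )) ||
       [[ $(cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode 2>/dev/null) == Y ]]; then
        echo vfio-pci
        return 0
    fi
    # fall back only if modprobe can resolve uio_pci_generic to actual module files
    if modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
        echo uio_pci_generic
    else
        echo 'No valid driver found'
        return 1
    fi
}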
setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:38.738 02:52:24 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:38.738 ************************************ 00:05:38.738 END TEST driver 00:05:38.738 ************************************ 00:05:38.738 02:52:24 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:38.738 02:52:24 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:38.738 02:52:24 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:38.738 02:52:24 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:38.738 ************************************ 00:05:38.738 START TEST devices 00:05:38.738 ************************************ 00:05:38.738 02:52:24 setup.sh.devices -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:38.738 * Looking for test storage... 00:05:38.738 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:38.738 02:52:24 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:38.738 02:52:24 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:38.738 02:52:24 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:38.738 02:52:24 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:39.306 02:52:25 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:39.306 02:52:25 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:05:39.306 02:52:25 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:05:39.306 02:52:25 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:05:39.306 02:52:25 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:39.306 02:52:25 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:05:39.306 02:52:25 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:05:39.306 02:52:25 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:39.306 02:52:25 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:39.306 02:52:25 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:39.306 02:52:25 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:05:39.306 02:52:25 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:05:39.306 02:52:25 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:39.306 02:52:25 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:39.306 02:52:25 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:39.306 02:52:25 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n1 00:05:39.306 02:52:25 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme2n1 00:05:39.306 02:52:25 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:39.306 02:52:25 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:39.306 02:52:25 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:39.306 02:52:25 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n2 
00:05:39.306 02:52:25 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme2n2 00:05:39.306 02:52:25 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:39.306 02:52:25 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:39.306 02:52:25 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:39.306 02:52:25 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n3 00:05:39.306 02:52:25 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme2n3 00:05:39.306 02:52:25 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:05:39.306 02:52:25 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:39.306 02:52:25 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:39.306 02:52:25 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3c3n1 00:05:39.306 02:52:25 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme3c3n1 00:05:39.306 02:52:25 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:39.306 02:52:25 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:39.306 02:52:25 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:39.306 02:52:25 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3n1 00:05:39.306 02:52:25 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme3n1 00:05:39.306 02:52:25 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:39.306 02:52:25 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:39.306 02:52:25 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:39.306 02:52:25 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:39.306 02:52:25 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:39.306 02:52:25 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:39.306 02:52:25 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:39.306 02:52:25 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:39.306 02:52:25 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:39.306 02:52:25 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:39.306 02:52:25 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:05:39.306 02:52:25 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:39.306 02:52:25 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:39.306 02:52:25 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:39.306 02:52:25 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:39.306 No valid GPT data, bailing 00:05:39.306 02:52:25 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:39.306 02:52:25 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:39.306 02:52:25 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:39.306 02:52:25 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:39.306 02:52:25 setup.sh.devices -- setup/common.sh@76 -- # local 
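get_zoned_devs, whose trace this is, classifies every /sys/block/nvme* entry by reading its queue/zoned attribute: "none" means a conventional namespace, anything else marks the device as zoned and excludes it from the test pool (all the namespaces here report "none", so the zoned list stays empty). The same check in isolation:

declare -A zoned_devs=()
for nvme in /sys/block/nvme*; do
    # "none" = conventional block device; "host-managed"/"host-aware" = zoned
    if [[ -e $nvme/queue/zoned && $(<"$nvme/queue/zoned") != none ]]; then
        zoned_devs[${nvme##*/}]=1
    fi
done
echo "zoned devices found: ${#zoned_devs[@]}"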
dev=nvme0n1 00:05:39.306 02:52:25 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:39.306 02:52:25 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:05:39.306 02:52:25 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:39.306 02:52:25 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:39.306 02:52:25 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:05:39.306 02:52:25 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:39.306 02:52:25 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:05:39.306 02:52:25 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:39.306 02:52:25 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:05:39.306 02:52:25 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:39.306 02:52:25 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:05:39.306 02:52:25 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:05:39.306 02:52:25 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:05:39.566 No valid GPT data, bailing 00:05:39.566 02:52:25 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:39.566 02:52:25 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:39.566 02:52:25 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:39.566 02:52:25 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:05:39.566 02:52:25 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:05:39.566 02:52:25 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:05:39.566 02:52:25 setup.sh.devices -- setup/common.sh@80 -- # echo 6343335936 00:05:39.566 02:52:25 setup.sh.devices -- setup/devices.sh@204 -- # (( 6343335936 >= min_disk_size )) 00:05:39.566 02:52:25 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:39.566 02:52:25 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:05:39.566 02:52:25 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:39.566 02:52:25 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:05:39.566 02:52:25 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:05:39.566 02:52:25 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:05:39.566 02:52:25 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:05:39.566 02:52:25 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:05:39.566 02:52:25 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n1 pt 00:05:39.566 02:52:25 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n1 00:05:39.566 No valid GPT data, bailing 00:05:39.566 02:52:25 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:39.566 02:52:25 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:39.566 02:52:25 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:39.566 02:52:25 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:05:39.566 02:52:25 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n1 00:05:39.566 02:52:25 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n1 ]] 
00:05:39.566 02:52:25 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:39.566 02:52:25 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:39.566 02:52:25 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:39.566 02:52:25 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:05:39.566 02:52:25 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:39.566 02:52:25 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n2 00:05:39.566 02:52:25 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:05:39.566 02:52:25 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:05:39.566 02:52:25 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:05:39.566 02:52:25 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n2 00:05:39.566 02:52:25 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n2 pt 00:05:39.566 02:52:25 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n2 00:05:39.566 No valid GPT data, bailing 00:05:39.566 02:52:25 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:05:39.566 02:52:25 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:39.566 02:52:25 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:39.566 02:52:25 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n2 00:05:39.566 02:52:25 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n2 00:05:39.566 02:52:25 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n2 ]] 00:05:39.566 02:52:25 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:39.566 02:52:25 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:39.566 02:52:25 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:39.566 02:52:25 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:05:39.566 02:52:25 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:39.566 02:52:25 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n3 00:05:39.566 02:52:25 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:05:39.566 02:52:25 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:05:39.566 02:52:25 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:05:39.566 02:52:25 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n3 00:05:39.566 02:52:25 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n3 pt 00:05:39.566 02:52:25 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n3 00:05:39.826 No valid GPT data, bailing 00:05:39.826 02:52:25 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:05:39.826 02:52:25 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:39.826 02:52:25 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:39.826 02:52:25 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n3 00:05:39.826 02:52:25 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n3 00:05:39.826 02:52:25 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n3 ]] 00:05:39.826 02:52:25 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:39.826 02:52:25 
setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:39.826 02:52:25 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:39.826 02:52:25 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:05:39.826 02:52:25 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:39.826 02:52:25 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3n1 00:05:39.826 02:52:25 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3 00:05:39.826 02:52:25 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:13.0 00:05:39.826 02:52:25 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]] 00:05:39.826 02:52:25 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme3n1 00:05:39.826 02:52:25 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme3n1 pt 00:05:39.826 02:52:25 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme3n1 00:05:39.826 No valid GPT data, bailing 00:05:39.826 02:52:25 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:39.826 02:52:25 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:39.826 02:52:25 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:39.826 02:52:25 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme3n1 00:05:39.826 02:52:25 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme3n1 00:05:39.826 02:52:25 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme3n1 ]] 00:05:39.826 02:52:25 setup.sh.devices -- setup/common.sh@80 -- # echo 1073741824 00:05:39.826 02:52:25 setup.sh.devices -- setup/devices.sh@204 -- # (( 1073741824 >= min_disk_size )) 00:05:39.826 02:52:25 setup.sh.devices -- setup/devices.sh@209 -- # (( 5 > 0 )) 00:05:39.826 02:52:25 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:39.826 02:52:25 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:39.826 02:52:25 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:39.826 02:52:25 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:39.826 02:52:25 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:39.826 ************************************ 00:05:39.826 START TEST nvme_mount 00:05:39.826 ************************************ 00:05:39.826 02:52:25 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:05:39.826 02:52:25 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:39.826 02:52:25 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:39.826 02:52:25 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:39.826 02:52:25 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:39.826 02:52:25 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:39.826 02:52:25 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:39.826 02:52:25 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:39.826 02:52:25 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:39.826 02:52:25 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local 
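The device scan above accepts a namespace only when spdk-gpt.py reports no GPT signature on it ("No valid GPT data, bailing" means the disk is unclaimed) and when its size, taken from sysfs, is at least min_disk_size (3221225472 bytes); each accepted block device is then mapped to its controller's PCI address (the harness additionally skips nvme*c* multipath nodes). A sketch of the size side of that filter; the PCI lookup through the sysfs device symlink is an assumption, since the trace only shows the resulting addresses:

min_disk_size=3221225472            # 3 GiB, as set at devices.sh@198 above
for block in /sys/block/nvme*; do
    dev=${block##*/}
    size=$(( $(<"$block/size") * 512 ))                        # sysfs size is in 512-byte sectors
    (( size >= min_disk_size )) || continue
    pci=$(basename "$(readlink -f "$block/device/device")")    # assumed controller PCI address
    echo "$dev -> $pci ($size bytes)"
done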
part part_start=0 part_end=0 00:05:39.826 02:52:25 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:39.826 02:52:25 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:39.826 02:52:25 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:39.826 02:52:25 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:39.826 02:52:25 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:39.826 02:52:25 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:39.826 02:52:25 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:39.826 02:52:25 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:39.826 02:52:25 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:39.826 02:52:25 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:40.763 Creating new GPT entries in memory. 00:05:40.763 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:40.763 other utilities. 00:05:40.763 02:52:26 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:40.763 02:52:26 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:40.763 02:52:26 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:40.763 02:52:26 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:40.763 02:52:26 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:42.141 Creating new GPT entries in memory. 00:05:42.141 The operation has completed successfully. 
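partition_drive, whose trace begins above, zaps any existing partition table with sgdisk and then creates fixed-size partitions (sector 2048 through 264191 for partition 1, i.e. 262144 sectors or 128 MiB), taking flock on the disk so concurrent runs don't race and waiting on kernel uevents via sync_dev_uevents.sh before touching the new /dev nodes. Roughly, for the single-partition case:

disk=/dev/nvme0n1
sgdisk "$disk" --zap-all                              # destroy old GPT/MBR structures
flock "$disk" sgdisk "$disk" --new=1:2048:264191      # partition 1: 262144 sectors = 128 MiB
# the harness then blocks until udev reports the nvme0n1p1 "add" event;
# outside the harness, a plain settle is a reasonable stand-in
udevadm settle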
00:05:42.141 02:52:27 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:42.141 02:52:27 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:42.141 02:52:27 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 72568 00:05:42.141 02:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:42.141 02:52:27 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:42.141 02:52:27 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:42.141 02:52:27 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:42.141 02:52:27 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:42.141 02:52:27 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:42.141 02:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:42.141 02:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:42.141 02:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:42.141 02:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:42.141 02:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:42.141 02:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:42.141 02:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:42.141 02:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:42.141 02:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:42.141 02:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:42.141 02:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.141 02:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:42.141 02:52:27 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:42.141 02:52:27 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:42.141 02:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:42.141 02:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:42.141 02:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:42.141 02:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.141 02:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:42.141 02:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.400 02:52:28 
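The mkfs step traced here builds the filesystem and mount used by the rest of nvme_mount: create the mount point, format the new partition with ext4, mount it, and drop a dummy test file that the later verify step checks for. A minimal equivalent, using the paths shown in the trace:

dev=/dev/nvme0n1p1
mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
mkdir -p "$mnt"
[[ -e $dev ]] && mkfs.ext4 -qF "$dev"    # -q quiet, -F force (a partition, not a whole disk)
mount "$dev" "$mnt"
touch "$mnt/test_nvme"                   # dummy file verify() later looks for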
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:42.400 02:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.400 02:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:42.400 02:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.400 02:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:42.400 02:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.659 02:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:42.659 02:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.918 02:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:42.918 02:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:42.918 02:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:42.918 02:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:42.918 02:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:42.918 02:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:42.918 02:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:42.918 02:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:42.918 02:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:42.918 02:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:42.918 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:42.918 02:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:42.918 02:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:43.178 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:43.178 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:43.178 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:43.178 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:43.178 02:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:43.178 02:52:29 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:43.178 02:52:29 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:43.178 02:52:29 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:43.178 02:52:29 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:43.178 02:52:29 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 
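cleanup_nvme, traced above, unmounts the test directory if it is still a mountpoint and wipes every filesystem and partition-table signature from the partition and then the whole disk; the log lines list exactly which magic bytes wipefs erased (the ext4 superblock magic 53 ef, both GPT headers, and the protective MBR 55 aa). Condensed:

mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
mountpoint -q "$mnt" && umount "$mnt"
[[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1   # ext4 signature on the partition
[[ -b /dev/nvme0n1   ]] && wipefs --all /dev/nvme0n1     # GPT headers + protective MBR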
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:43.178 02:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:43.178 02:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:43.178 02:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:43.178 02:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:43.178 02:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:43.178 02:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:43.178 02:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:43.178 02:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:43.178 02:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:43.178 02:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.178 02:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:43.178 02:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:43.178 02:52:29 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:43.178 02:52:29 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:43.438 02:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:43.438 02:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:43.438 02:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:43.438 02:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.438 02:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:43.438 02:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.438 02:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:43.438 02:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.697 02:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:43.697 02:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.697 02:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:43.697 02:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.965 02:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:43.965 02:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.232 02:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:44.232 02:52:30 setup.sh.devices.nvme_mount -- 
setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:44.232 02:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:44.232 02:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:44.232 02:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:44.232 02:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:44.232 02:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:05:44.232 02:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:44.232 02:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:44.232 02:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:44.232 02:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:44.232 02:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:44.232 02:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:44.232 02:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:44.232 02:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.232 02:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:44.232 02:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:44.232 02:52:30 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:44.232 02:52:30 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:44.491 02:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:44.491 02:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:44.491 02:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:44.491 02:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.491 02:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:44.492 02:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.750 02:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:44.750 02:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.750 02:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:44.750 02:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.750 02:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:44.750 02:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.010 02:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:45.010 02:52:30 
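verify(), whose data@nvme0n1 pass is traced above, restricts setup.sh to a single controller with PCI_ALLOWED and scans the `setup output config` listing line by line: the allowed PCI address must carry an "Active devices: ..." status naming the expected mount or holder (that match is what sets found=1), while every other controller simply falls through. A sketch of that matching loop; verify_active is a hypothetical wrapper name:

verify_active() {
    local dev=$1 want=$2 pci _ status found=0
    while read -r pci _ _ status; do
        if [[ $pci == "$dev" && $status == *"Active devices: "*"$want"* ]]; then
            found=1   # e.g. "Active devices: data@nvme0n1, so not binding PCI dev"
        fi
    done < <(PCI_ALLOWED=$dev /home/vagrant/spdk_repo/spdk/scripts/setup.sh config)
    (( found == 1 ))
}

verify_active 0000:00:11.0 data@nvme0n1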
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.269 02:52:31 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:45.269 02:52:31 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:45.269 02:52:31 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:45.269 02:52:31 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:45.269 02:52:31 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:45.269 02:52:31 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:45.269 02:52:31 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:45.270 02:52:31 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:45.270 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:45.270 00:05:45.270 real 0m5.385s 00:05:45.270 user 0m1.493s 00:05:45.270 sys 0m1.573s 00:05:45.270 02:52:31 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:45.270 02:52:31 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:45.270 ************************************ 00:05:45.270 END TEST nvme_mount 00:05:45.270 ************************************ 00:05:45.270 02:52:31 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:45.270 02:52:31 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:45.270 02:52:31 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:45.270 02:52:31 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:45.270 ************************************ 00:05:45.270 START TEST dm_mount 00:05:45.270 ************************************ 00:05:45.270 02:52:31 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:05:45.270 02:52:31 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:45.270 02:52:31 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:45.270 02:52:31 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:45.270 02:52:31 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:45.270 02:52:31 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:45.270 02:52:31 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:45.270 02:52:31 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:45.270 02:52:31 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:45.270 02:52:31 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:45.270 02:52:31 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:45.270 02:52:31 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:45.270 02:52:31 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:45.270 02:52:31 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:45.270 02:52:31 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:45.270 02:52:31 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:45.270 02:52:31 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:45.270 02:52:31 setup.sh.devices.dm_mount -- 
setup/common.sh@46 -- # (( part++ )) 00:05:45.270 02:52:31 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:45.270 02:52:31 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:45.270 02:52:31 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:45.270 02:52:31 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:46.206 Creating new GPT entries in memory. 00:05:46.206 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:46.206 other utilities. 00:05:46.206 02:52:32 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:46.206 02:52:32 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:46.206 02:52:32 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:46.206 02:52:32 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:46.206 02:52:32 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:47.581 Creating new GPT entries in memory. 00:05:47.581 The operation has completed successfully. 00:05:47.581 02:52:33 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:47.581 02:52:33 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:47.581 02:52:33 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:47.581 02:52:33 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:47.581 02:52:33 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:48.519 The operation has completed successfully. 
00:05:48.519 02:52:34 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:48.519 02:52:34 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:48.519 02:52:34 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 73200 00:05:48.519 02:52:34 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:48.519 02:52:34 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:48.519 02:52:34 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:48.519 02:52:34 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:48.519 02:52:34 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:48.519 02:52:34 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:48.519 02:52:34 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:48.519 02:52:34 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:48.519 02:52:34 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:48.519 02:52:34 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:48.519 02:52:34 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:48.519 02:52:34 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:48.519 02:52:34 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:48.519 02:52:34 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:48.519 02:52:34 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:48.519 02:52:34 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:48.519 02:52:34 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:48.519 02:52:34 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:48.519 02:52:34 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:48.519 02:52:34 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:48.519 02:52:34 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:48.519 02:52:34 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:48.519 02:52:34 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:48.519 02:52:34 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:48.519 02:52:34 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:48.519 02:52:34 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:48.519 02:52:34 setup.sh.devices.dm_mount -- 
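The dm_mount test traced here stacks a device-mapper node on top of the two fresh partitions, resolves the friendly /dev/mapper/nvme_dm_test name back to its dm-N node with readlink, and confirms through each partition's holders/ directory that both really sit under that dm device before formatting and mounting it. The dmsetup table itself is not shown in the log, so the linear concatenation below is an assumption about what `dmsetup create nvme_dm_test` was fed:

p1=/dev/nvme0n1p1 p2=/dev/nvme0n1p2
s1=$(blockdev --getsz "$p1")                  # sizes in 512-byte sectors
s2=$(blockdev --getsz "$p2")
dmsetup create nvme_dm_test <<EOF
0 $s1 linear $p1 0
$s1 $s2 linear $p2 0
EOF
dm=$(basename "$(readlink -f /dev/mapper/nvme_dm_test)")   # e.g. dm-0
[[ -e /sys/class/block/nvme0n1p1/holders/$dm ]]
[[ -e /sys/class/block/nvme0n1p2/holders/$dm ]]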
setup/devices.sh@56 -- # : 00:05:48.519 02:52:34 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:48.519 02:52:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.519 02:52:34 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:48.519 02:52:34 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:48.519 02:52:34 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:48.519 02:52:34 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:48.519 02:52:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:48.519 02:52:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:48.519 02:52:34 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:48.519 02:52:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.519 02:52:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:48.519 02:52:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.777 02:52:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:48.777 02:52:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.777 02:52:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:48.777 02:52:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.777 02:52:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:48.777 02:52:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.035 02:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:49.035 02:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.294 02:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:49.294 02:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:49.294 02:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:49.294 02:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:49.294 02:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:49.294 02:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:49.294 02:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:49.294 02:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:49.294 02:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:49.294 02:52:35 setup.sh.devices.dm_mount -- 
setup/devices.sh@50 -- # local mount_point= 00:05:49.294 02:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:49.294 02:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:49.294 02:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:49.294 02:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:49.294 02:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.294 02:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:49.294 02:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:49.294 02:52:35 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:49.294 02:52:35 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:49.553 02:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:49.553 02:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:49.553 02:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:49.553 02:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.553 02:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:49.553 02:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.812 02:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:49.812 02:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.812 02:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:49.812 02:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.812 02:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:49.812 02:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.071 02:52:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:50.071 02:52:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.330 02:52:36 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:50.330 02:52:36 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:50.330 02:52:36 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:50.330 02:52:36 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:50.330 02:52:36 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:50.331 02:52:36 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:50.331 02:52:36 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:50.331 02:52:36 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:50.331 02:52:36 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 
00:05:50.331 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:50.331 02:52:36 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:50.331 02:52:36 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:50.331 00:05:50.331 real 0m5.151s 00:05:50.331 user 0m1.026s 00:05:50.331 sys 0m1.058s 00:05:50.331 02:52:36 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:50.331 02:52:36 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:50.331 ************************************ 00:05:50.331 END TEST dm_mount 00:05:50.331 ************************************ 00:05:50.331 02:52:36 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:50.331 02:52:36 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:50.331 02:52:36 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:50.331 02:52:36 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:50.331 02:52:36 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:50.589 02:52:36 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:50.589 02:52:36 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:50.848 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:50.848 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:50.848 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:50.848 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:50.848 02:52:36 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:50.848 02:52:36 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:50.848 02:52:36 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:50.848 02:52:36 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:50.848 02:52:36 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:50.848 02:52:36 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:50.848 02:52:36 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:50.848 00:05:50.848 real 0m12.585s 00:05:50.848 user 0m3.447s 00:05:50.848 sys 0m3.440s 00:05:50.848 02:52:36 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:50.848 ************************************ 00:05:50.848 END TEST devices 00:05:50.848 ************************************ 00:05:50.848 02:52:36 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:50.848 00:05:50.848 real 0m44.460s 00:05:50.848 user 0m10.679s 00:05:50.848 sys 0m13.395s 00:05:50.848 02:52:36 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:50.848 02:52:36 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:50.848 ************************************ 00:05:50.848 END TEST setup.sh 00:05:50.848 ************************************ 00:05:50.848 02:52:36 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:51.415 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:51.673 Hugepages 00:05:51.673 node hugesize free / total 00:05:51.673 node0 1048576kB 0 / 0 00:05:51.673 node0 2048kB 2048 / 2048 00:05:51.673 
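The dm_mount test wrapped up above reduces to a short device-mapper and filesystem sequence. The sketch below is a hand-driven equivalent rather than the literal setup/devices.sh code: the partition names match the log, while the mount point and the linear dm table are illustrative assumptions.
# Build a dm target over the two NVMe partitions, format, mount, then tear it down.
p1=/dev/nvme0n1p1; p2=/dev/nvme0n1p2
s1=$(blockdev --getsz "$p1"); s2=$(blockdev --getsz "$p2")    # sizes in 512-byte sectors
printf '0 %s linear %s 0\n%s %s linear %s 0\n' "$s1" "$p1" "$s1" "$s2" "$p2" |
    dmsetup create nvme_dm_test                               # creates /dev/mapper/nvme_dm_test
mkfs.ext4 -qF /dev/mapper/nvme_dm_test                        # same mkfs flags as the log
mkdir -p /mnt/dm_mount
mount /dev/mapper/nvme_dm_test /mnt/dm_mount && touch /mnt/dm_mount/test_dm
# Cleanup, mirroring cleanup_dm/cleanup_nvme above.
umount /mnt/dm_mount
dmsetup remove --force nvme_dm_test
wipefs --all "$p1" "$p2" /dev/nvme0n1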
00:05:51.673 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:51.932 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:51.932 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:51.932 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:52.190 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:05:52.190 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:05:52.190 02:52:38 -- spdk/autotest.sh@130 -- # uname -s 00:05:52.190 02:52:38 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:52.190 02:52:38 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:52.190 02:52:38 -- common/autotest_common.sh@1527 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:52.757 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:53.323 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:53.323 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:53.323 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:53.323 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:53.323 02:52:39 -- common/autotest_common.sh@1528 -- # sleep 1 00:05:54.259 02:52:40 -- common/autotest_common.sh@1529 -- # bdfs=() 00:05:54.259 02:52:40 -- common/autotest_common.sh@1529 -- # local bdfs 00:05:54.259 02:52:40 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:05:54.259 02:52:40 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:05:54.259 02:52:40 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:54.259 02:52:40 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:54.259 02:52:40 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:54.259 02:52:40 -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:54.259 02:52:40 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:54.518 02:52:40 -- common/autotest_common.sh@1511 -- # (( 4 == 0 )) 00:05:54.518 02:52:40 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:54.518 02:52:40 -- common/autotest_common.sh@1532 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:54.777 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:55.035 Waiting for block devices as requested 00:05:55.035 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:55.035 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:55.293 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:05:55.293 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:00.591 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:00.591 02:52:46 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:06:00.591 02:52:46 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:06:00.591 02:52:46 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:00.592 02:52:46 -- common/autotest_common.sh@1498 -- # grep 0000:00:10.0/nvme/nvme 00:06:00.592 02:52:46 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:00.592 02:52:46 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:06:00.592 02:52:46 -- common/autotest_common.sh@1503 -- # basename 
/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:00.592 02:52:46 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme1 00:06:00.592 02:52:46 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme1 00:06:00.592 02:52:46 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme1 ]] 00:06:00.592 02:52:46 -- common/autotest_common.sh@1541 -- # grep oacs 00:06:00.592 02:52:46 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme1 00:06:00.592 02:52:46 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:06:00.592 02:52:46 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:06:00.592 02:52:46 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:06:00.592 02:52:46 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:06:00.592 02:52:46 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:06:00.592 02:52:46 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme1 00:06:00.592 02:52:46 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:06:00.592 02:52:46 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:06:00.592 02:52:46 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:06:00.592 02:52:46 -- common/autotest_common.sh@1553 -- # continue 00:06:00.592 02:52:46 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:06:00.592 02:52:46 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:06:00.592 02:52:46 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:00.592 02:52:46 -- common/autotest_common.sh@1498 -- # grep 0000:00:11.0/nvme/nvme 00:06:00.592 02:52:46 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:00.592 02:52:46 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:06:00.592 02:52:46 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:00.592 02:52:46 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:06:00.592 02:52:46 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:06:00.592 02:52:46 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:06:00.592 02:52:46 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:06:00.592 02:52:46 -- common/autotest_common.sh@1541 -- # grep oacs 00:06:00.592 02:52:46 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:06:00.592 02:52:46 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:06:00.592 02:52:46 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:06:00.592 02:52:46 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:06:00.592 02:52:46 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:06:00.592 02:52:46 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:06:00.592 02:52:46 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:06:00.592 02:52:46 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:06:00.592 02:52:46 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:06:00.592 02:52:46 -- common/autotest_common.sh@1553 -- # continue 00:06:00.592 02:52:46 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:06:00.592 02:52:46 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:06:00.592 02:52:46 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:00.592 02:52:46 -- common/autotest_common.sh@1498 -- # 
grep 0000:00:12.0/nvme/nvme 00:06:00.592 02:52:46 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:06:00.592 02:52:46 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:06:00.592 02:52:46 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:06:00.592 02:52:46 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme2 00:06:00.592 02:52:46 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme2 00:06:00.592 02:52:46 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme2 ]] 00:06:00.592 02:52:46 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme2 00:06:00.592 02:52:46 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:06:00.592 02:52:46 -- common/autotest_common.sh@1541 -- # grep oacs 00:06:00.592 02:52:46 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:06:00.592 02:52:46 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:06:00.592 02:52:46 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:06:00.592 02:52:46 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme2 00:06:00.592 02:52:46 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:06:00.592 02:52:46 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:06:00.592 02:52:46 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:06:00.592 02:52:46 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:06:00.592 02:52:46 -- common/autotest_common.sh@1553 -- # continue 00:06:00.592 02:52:46 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:06:00.592 02:52:46 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:06:00.592 02:52:46 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:00.592 02:52:46 -- common/autotest_common.sh@1498 -- # grep 0000:00:13.0/nvme/nvme 00:06:00.592 02:52:46 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:06:00.592 02:52:46 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:06:00.592 02:52:46 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:06:00.592 02:52:46 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme3 00:06:00.592 02:52:46 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme3 00:06:00.592 02:52:46 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme3 ]] 00:06:00.592 02:52:46 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme3 00:06:00.592 02:52:46 -- common/autotest_common.sh@1541 -- # grep oacs 00:06:00.592 02:52:46 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:06:00.592 02:52:46 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:06:00.592 02:52:46 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:06:00.592 02:52:46 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:06:00.592 02:52:46 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme3 00:06:00.592 02:52:46 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:06:00.592 02:52:46 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:06:00.592 02:52:46 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:06:00.592 02:52:46 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:06:00.592 02:52:46 -- common/autotest_common.sh@1553 -- # continue 00:06:00.592 02:52:46 -- spdk/autotest.sh@135 -- # timing_exit 
pre_cleanup 00:06:00.592 02:52:46 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:00.592 02:52:46 -- common/autotest_common.sh@10 -- # set +x 00:06:00.592 02:52:46 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:06:00.592 02:52:46 -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:00.592 02:52:46 -- common/autotest_common.sh@10 -- # set +x 00:06:00.592 02:52:46 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:01.160 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:01.729 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:01.729 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:01.729 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:01.729 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:01.729 02:52:47 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:06:01.729 02:52:47 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:01.729 02:52:47 -- common/autotest_common.sh@10 -- # set +x 00:06:01.729 02:52:47 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:06:01.729 02:52:47 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:06:01.729 02:52:47 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:06:01.729 02:52:47 -- common/autotest_common.sh@1573 -- # bdfs=() 00:06:01.729 02:52:47 -- common/autotest_common.sh@1573 -- # local bdfs 00:06:01.729 02:52:47 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:06:01.729 02:52:47 -- common/autotest_common.sh@1509 -- # bdfs=() 00:06:01.729 02:52:47 -- common/autotest_common.sh@1509 -- # local bdfs 00:06:01.729 02:52:47 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:01.729 02:52:47 -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:01.729 02:52:47 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:06:01.729 02:52:47 -- common/autotest_common.sh@1511 -- # (( 4 == 0 )) 00:06:01.729 02:52:47 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:06:01.730 02:52:47 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:06:01.730 02:52:47 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:06:01.989 02:52:47 -- common/autotest_common.sh@1576 -- # device=0x0010 00:06:01.989 02:52:47 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:01.989 02:52:47 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:06:01.989 02:52:47 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:06:01.989 02:52:47 -- common/autotest_common.sh@1576 -- # device=0x0010 00:06:01.989 02:52:47 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:01.989 02:52:47 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:06:01.989 02:52:47 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:06:01.989 02:52:47 -- common/autotest_common.sh@1576 -- # device=0x0010 00:06:01.989 02:52:47 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:01.989 02:52:47 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:06:01.989 02:52:47 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:06:01.989 02:52:47 -- common/autotest_common.sh@1576 -- # device=0x0010 00:06:01.989 
02:52:47 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:01.989 02:52:47 -- common/autotest_common.sh@1582 -- # printf '%s\n' 00:06:01.989 02:52:47 -- common/autotest_common.sh@1588 -- # [[ -z '' ]] 00:06:01.989 02:52:47 -- common/autotest_common.sh@1589 -- # return 0 00:06:01.989 02:52:47 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:06:01.989 02:52:47 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:06:01.989 02:52:47 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:01.990 02:52:47 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:01.990 02:52:47 -- spdk/autotest.sh@162 -- # timing_enter lib 00:06:01.990 02:52:47 -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:01.990 02:52:47 -- common/autotest_common.sh@10 -- # set +x 00:06:01.990 02:52:47 -- spdk/autotest.sh@164 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:01.990 02:52:47 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:01.990 02:52:47 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:01.990 02:52:47 -- common/autotest_common.sh@10 -- # set +x 00:06:01.990 ************************************ 00:06:01.990 START TEST env 00:06:01.990 ************************************ 00:06:01.990 02:52:47 env -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:01.990 * Looking for test storage... 00:06:01.990 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:06:01.990 02:52:47 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:01.990 02:52:47 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:01.990 02:52:47 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:01.990 02:52:47 env -- common/autotest_common.sh@10 -- # set +x 00:06:01.990 ************************************ 00:06:01.990 START TEST env_memory 00:06:01.990 ************************************ 00:06:01.990 02:52:47 env.env_memory -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:01.990 00:06:01.990 00:06:01.990 CUnit - A unit testing framework for C - Version 2.1-3 00:06:01.990 http://cunit.sourceforge.net/ 00:06:01.990 00:06:01.990 00:06:01.990 Suite: memory 00:06:01.990 Test: alloc and free memory map ...[2024-05-14 02:52:47.981492] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:02.249 passed 00:06:02.249 Test: mem map translation ...[2024-05-14 02:52:48.056735] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:02.249 [2024-05-14 02:52:48.056840] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:02.249 [2024-05-14 02:52:48.056942] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:02.249 [2024-05-14 02:52:48.056974] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:02.249 passed 00:06:02.249 Test: mem map registration ...[2024-05-14 02:52:48.155328] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:06:02.249 [2024-05-14 
02:52:48.155461] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:06:02.249 passed 00:06:02.509 Test: mem map adjacent registrations ...passed 00:06:02.509 00:06:02.509 Run Summary: Type Total Ran Passed Failed Inactive 00:06:02.509 suites 1 1 n/a 0 0 00:06:02.509 tests 4 4 4 0 0 00:06:02.509 asserts 152 152 152 0 n/a 00:06:02.509 00:06:02.509 Elapsed time = 0.383 seconds 00:06:02.509 00:06:02.509 real 0m0.420s 00:06:02.509 user 0m0.389s 00:06:02.509 sys 0m0.028s 00:06:02.509 02:52:48 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:02.509 ************************************ 00:06:02.509 END TEST env_memory 00:06:02.509 02:52:48 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:02.509 ************************************ 00:06:02.509 02:52:48 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:02.509 02:52:48 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:02.509 02:52:48 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:02.509 02:52:48 env -- common/autotest_common.sh@10 -- # set +x 00:06:02.509 ************************************ 00:06:02.509 START TEST env_vtophys 00:06:02.509 ************************************ 00:06:02.509 02:52:48 env.env_vtophys -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:02.509 EAL: lib.eal log level changed from notice to debug 00:06:02.509 EAL: Detected lcore 0 as core 0 on socket 0 00:06:02.509 EAL: Detected lcore 1 as core 0 on socket 0 00:06:02.509 EAL: Detected lcore 2 as core 0 on socket 0 00:06:02.509 EAL: Detected lcore 3 as core 0 on socket 0 00:06:02.509 EAL: Detected lcore 4 as core 0 on socket 0 00:06:02.509 EAL: Detected lcore 5 as core 0 on socket 0 00:06:02.509 EAL: Detected lcore 6 as core 0 on socket 0 00:06:02.509 EAL: Detected lcore 7 as core 0 on socket 0 00:06:02.509 EAL: Detected lcore 8 as core 0 on socket 0 00:06:02.509 EAL: Detected lcore 9 as core 0 on socket 0 00:06:02.509 EAL: Maximum logical cores by configuration: 128 00:06:02.509 EAL: Detected CPU lcores: 10 00:06:02.509 EAL: Detected NUMA nodes: 1 00:06:02.509 EAL: Checking presence of .so 'librte_eal.so.24.2' 00:06:02.509 EAL: Detected shared linkage of DPDK 00:06:02.509 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so.24.2 00:06:02.509 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so.24.2 00:06:02.509 EAL: Registered [vdev] bus. 
00:06:02.509 EAL: bus.vdev log level changed from disabled to notice 00:06:02.509 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so.24.2 00:06:02.509 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so.24.2 00:06:02.509 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:06:02.509 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:06:02.509 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so 00:06:02.509 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so 00:06:02.509 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so 00:06:02.509 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so 00:06:02.509 EAL: No shared files mode enabled, IPC will be disabled 00:06:02.509 EAL: No shared files mode enabled, IPC is disabled 00:06:02.509 EAL: Selected IOVA mode 'PA' 00:06:02.509 EAL: Probing VFIO support... 00:06:02.509 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:02.509 EAL: VFIO modules not loaded, skipping VFIO support... 00:06:02.509 EAL: Ask a virtual area of 0x2e000 bytes 00:06:02.509 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:02.509 EAL: Setting up physically contiguous memory... 00:06:02.509 EAL: Setting maximum number of open files to 524288 00:06:02.509 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:02.509 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:02.509 EAL: Ask a virtual area of 0x61000 bytes 00:06:02.509 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:02.509 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:02.509 EAL: Ask a virtual area of 0x400000000 bytes 00:06:02.509 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:02.509 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:02.509 EAL: Ask a virtual area of 0x61000 bytes 00:06:02.509 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:02.509 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:02.509 EAL: Ask a virtual area of 0x400000000 bytes 00:06:02.509 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:02.509 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:02.509 EAL: Ask a virtual area of 0x61000 bytes 00:06:02.509 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:02.509 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:02.509 EAL: Ask a virtual area of 0x400000000 bytes 00:06:02.509 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:02.509 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:02.509 EAL: Ask a virtual area of 0x61000 bytes 00:06:02.509 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:02.509 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:02.509 EAL: Ask a virtual area of 0x400000000 bytes 00:06:02.509 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:02.509 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:02.509 EAL: Hugepages will be freed exactly as allocated. 
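The hugepage bookkeeping in the EAL output above is backed by pages that scripts/setup.sh reserved beforehand (the "node0 2048kB 2048 / 2048" entry in the status dump earlier). A minimal manual equivalent, with the page count as an illustrative value, is just the standard sysfs knobs:
# Reserve 2048 x 2 MiB hugepages and confirm the kernel accounting (run as root).
echo 2048 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages   # per-node view
grep -i huge /proc/meminfo                                                   # HugePages_Total / HugePages_Free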
00:06:02.509 EAL: No shared files mode enabled, IPC is disabled 00:06:02.509 EAL: No shared files mode enabled, IPC is disabled 00:06:02.768 EAL: TSC frequency is ~2200000 KHz 00:06:02.768 EAL: Main lcore 0 is ready (tid=7f24b585fa40;cpuset=[0]) 00:06:02.768 EAL: Trying to obtain current memory policy. 00:06:02.768 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:02.768 EAL: Restoring previous memory policy: 0 00:06:02.768 EAL: request: mp_malloc_sync 00:06:02.768 EAL: No shared files mode enabled, IPC is disabled 00:06:02.768 EAL: Heap on socket 0 was expanded by 2MB 00:06:02.768 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:02.768 EAL: No shared files mode enabled, IPC is disabled 00:06:02.768 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:02.769 EAL: Mem event callback 'spdk:(nil)' registered 00:06:02.769 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:06:02.769 00:06:02.769 00:06:02.769 CUnit - A unit testing framework for C - Version 2.1-3 00:06:02.769 http://cunit.sourceforge.net/ 00:06:02.769 00:06:02.769 00:06:02.769 Suite: components_suite 00:06:03.028 Test: vtophys_malloc_test ...passed 00:06:03.028 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:03.028 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:03.028 EAL: Restoring previous memory policy: 4 00:06:03.028 EAL: Calling mem event callback 'spdk:(nil)' 00:06:03.028 EAL: request: mp_malloc_sync 00:06:03.028 EAL: No shared files mode enabled, IPC is disabled 00:06:03.028 EAL: Heap on socket 0 was expanded by 4MB 00:06:03.028 EAL: Calling mem event callback 'spdk:(nil)' 00:06:03.028 EAL: request: mp_malloc_sync 00:06:03.028 EAL: No shared files mode enabled, IPC is disabled 00:06:03.028 EAL: Heap on socket 0 was shrunk by 4MB 00:06:03.028 EAL: Trying to obtain current memory policy. 00:06:03.028 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:03.028 EAL: Restoring previous memory policy: 4 00:06:03.028 EAL: Calling mem event callback 'spdk:(nil)' 00:06:03.028 EAL: request: mp_malloc_sync 00:06:03.028 EAL: No shared files mode enabled, IPC is disabled 00:06:03.028 EAL: Heap on socket 0 was expanded by 6MB 00:06:03.028 EAL: Calling mem event callback 'spdk:(nil)' 00:06:03.028 EAL: request: mp_malloc_sync 00:06:03.028 EAL: No shared files mode enabled, IPC is disabled 00:06:03.028 EAL: Heap on socket 0 was shrunk by 6MB 00:06:03.028 EAL: Trying to obtain current memory policy. 00:06:03.028 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:03.028 EAL: Restoring previous memory policy: 4 00:06:03.028 EAL: Calling mem event callback 'spdk:(nil)' 00:06:03.028 EAL: request: mp_malloc_sync 00:06:03.028 EAL: No shared files mode enabled, IPC is disabled 00:06:03.028 EAL: Heap on socket 0 was expanded by 10MB 00:06:03.028 EAL: Calling mem event callback 'spdk:(nil)' 00:06:03.028 EAL: request: mp_malloc_sync 00:06:03.028 EAL: No shared files mode enabled, IPC is disabled 00:06:03.028 EAL: Heap on socket 0 was shrunk by 10MB 00:06:03.028 EAL: Trying to obtain current memory policy. 
00:06:03.028 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:03.028 EAL: Restoring previous memory policy: 4 00:06:03.028 EAL: Calling mem event callback 'spdk:(nil)' 00:06:03.028 EAL: request: mp_malloc_sync 00:06:03.028 EAL: No shared files mode enabled, IPC is disabled 00:06:03.028 EAL: Heap on socket 0 was expanded by 18MB 00:06:03.028 EAL: Calling mem event callback 'spdk:(nil)' 00:06:03.028 EAL: request: mp_malloc_sync 00:06:03.028 EAL: No shared files mode enabled, IPC is disabled 00:06:03.028 EAL: Heap on socket 0 was shrunk by 18MB 00:06:03.028 EAL: Trying to obtain current memory policy. 00:06:03.028 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:03.028 EAL: Restoring previous memory policy: 4 00:06:03.028 EAL: Calling mem event callback 'spdk:(nil)' 00:06:03.028 EAL: request: mp_malloc_sync 00:06:03.028 EAL: No shared files mode enabled, IPC is disabled 00:06:03.028 EAL: Heap on socket 0 was expanded by 34MB 00:06:03.028 EAL: Calling mem event callback 'spdk:(nil)' 00:06:03.028 EAL: request: mp_malloc_sync 00:06:03.028 EAL: No shared files mode enabled, IPC is disabled 00:06:03.028 EAL: Heap on socket 0 was shrunk by 34MB 00:06:03.028 EAL: Trying to obtain current memory policy. 00:06:03.028 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:03.028 EAL: Restoring previous memory policy: 4 00:06:03.028 EAL: Calling mem event callback 'spdk:(nil)' 00:06:03.028 EAL: request: mp_malloc_sync 00:06:03.028 EAL: No shared files mode enabled, IPC is disabled 00:06:03.028 EAL: Heap on socket 0 was expanded by 66MB 00:06:03.028 EAL: Calling mem event callback 'spdk:(nil)' 00:06:03.028 EAL: request: mp_malloc_sync 00:06:03.028 EAL: No shared files mode enabled, IPC is disabled 00:06:03.028 EAL: Heap on socket 0 was shrunk by 66MB 00:06:03.028 EAL: Trying to obtain current memory policy. 00:06:03.028 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:03.028 EAL: Restoring previous memory policy: 4 00:06:03.028 EAL: Calling mem event callback 'spdk:(nil)' 00:06:03.028 EAL: request: mp_malloc_sync 00:06:03.028 EAL: No shared files mode enabled, IPC is disabled 00:06:03.028 EAL: Heap on socket 0 was expanded by 130MB 00:06:03.028 EAL: Calling mem event callback 'spdk:(nil)' 00:06:03.028 EAL: request: mp_malloc_sync 00:06:03.028 EAL: No shared files mode enabled, IPC is disabled 00:06:03.028 EAL: Heap on socket 0 was shrunk by 130MB 00:06:03.028 EAL: Trying to obtain current memory policy. 00:06:03.028 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:03.287 EAL: Restoring previous memory policy: 4 00:06:03.287 EAL: Calling mem event callback 'spdk:(nil)' 00:06:03.287 EAL: request: mp_malloc_sync 00:06:03.287 EAL: No shared files mode enabled, IPC is disabled 00:06:03.287 EAL: Heap on socket 0 was expanded by 258MB 00:06:03.287 EAL: Calling mem event callback 'spdk:(nil)' 00:06:03.287 EAL: request: mp_malloc_sync 00:06:03.287 EAL: No shared files mode enabled, IPC is disabled 00:06:03.287 EAL: Heap on socket 0 was shrunk by 258MB 00:06:03.287 EAL: Trying to obtain current memory policy. 
00:06:03.287 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:03.287 EAL: Restoring previous memory policy: 4 00:06:03.287 EAL: Calling mem event callback 'spdk:(nil)' 00:06:03.287 EAL: request: mp_malloc_sync 00:06:03.287 EAL: No shared files mode enabled, IPC is disabled 00:06:03.287 EAL: Heap on socket 0 was expanded by 514MB 00:06:03.287 EAL: Calling mem event callback 'spdk:(nil)' 00:06:03.547 EAL: request: mp_malloc_sync 00:06:03.547 EAL: No shared files mode enabled, IPC is disabled 00:06:03.547 EAL: Heap on socket 0 was shrunk by 514MB 00:06:03.547 EAL: Trying to obtain current memory policy. 00:06:03.547 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:03.547 EAL: Restoring previous memory policy: 4 00:06:03.547 EAL: Calling mem event callback 'spdk:(nil)' 00:06:03.547 EAL: request: mp_malloc_sync 00:06:03.547 EAL: No shared files mode enabled, IPC is disabled 00:06:03.547 EAL: Heap on socket 0 was expanded by 1026MB 00:06:03.806 EAL: Calling mem event callback 'spdk:(nil)' 00:06:03.806 passed 00:06:03.806 00:06:03.806 Run Summary: Type Total Ran Passed Failed Inactive 00:06:03.806 suites 1 1 n/a 0 0 00:06:03.806 tests 2 2 2 0 0 00:06:03.806 asserts 5428 5428 5428 0 n/a 00:06:03.806 00:06:03.806 Elapsed time = 1.103 secondsEAL: request: mp_malloc_sync 00:06:03.806 EAL: No shared files mode enabled, IPC is disabled 00:06:03.806 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:03.806 00:06:03.806 EAL: Calling mem event callback 'spdk:(nil)' 00:06:03.806 EAL: request: mp_malloc_sync 00:06:03.806 EAL: No shared files mode enabled, IPC is disabled 00:06:03.806 EAL: Heap on socket 0 was shrunk by 2MB 00:06:03.806 EAL: No shared files mode enabled, IPC is disabled 00:06:03.806 EAL: No shared files mode enabled, IPC is disabled 00:06:03.806 EAL: No shared files mode enabled, IPC is disabled 00:06:03.806 00:06:03.806 real 0m1.375s 00:06:03.806 user 0m0.618s 00:06:03.806 sys 0m0.618s 00:06:03.806 02:52:49 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:03.806 ************************************ 00:06:03.806 END TEST env_vtophys 00:06:03.806 02:52:49 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:03.806 ************************************ 00:06:03.806 02:52:49 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:03.806 02:52:49 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:03.806 02:52:49 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:03.806 02:52:49 env -- common/autotest_common.sh@10 -- # set +x 00:06:03.806 ************************************ 00:06:03.806 START TEST env_pci 00:06:03.806 ************************************ 00:06:03.806 02:52:49 env.env_pci -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:03.807 00:06:03.807 00:06:03.807 CUnit - A unit testing framework for C - Version 2.1-3 00:06:03.807 http://cunit.sourceforge.net/ 00:06:03.807 00:06:03.807 00:06:03.807 Suite: pci 00:06:03.807 Test: pci_hook ...[2024-05-14 02:52:49.812524] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 74962 has claimed it 00:06:04.066 passed 00:06:04.066 00:06:04.066 EAL: Cannot find device (10000:00:01.0) 00:06:04.066 EAL: Failed to attach device on primary process 00:06:04.066 Run Summary: Type Total Ran Passed Failed Inactive 00:06:04.066 suites 1 1 n/a 0 0 00:06:04.066 tests 1 1 1 0 0 00:06:04.066 
asserts 25 25 25 0 n/a 00:06:04.066 00:06:04.066 Elapsed time = 0.007 seconds 00:06:04.066 00:06:04.066 real 0m0.068s 00:06:04.066 user 0m0.034s 00:06:04.066 sys 0m0.034s 00:06:04.066 02:52:49 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:04.066 ************************************ 00:06:04.066 END TEST env_pci 00:06:04.066 ************************************ 00:06:04.066 02:52:49 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:04.066 02:52:49 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:04.066 02:52:49 env -- env/env.sh@15 -- # uname 00:06:04.066 02:52:49 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:04.066 02:52:49 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:04.066 02:52:49 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:04.066 02:52:49 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:06:04.066 02:52:49 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:04.066 02:52:49 env -- common/autotest_common.sh@10 -- # set +x 00:06:04.066 ************************************ 00:06:04.066 START TEST env_dpdk_post_init 00:06:04.066 ************************************ 00:06:04.066 02:52:49 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:04.066 EAL: Detected CPU lcores: 10 00:06:04.066 EAL: Detected NUMA nodes: 1 00:06:04.066 EAL: Detected shared linkage of DPDK 00:06:04.066 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:04.066 EAL: Selected IOVA mode 'PA' 00:06:04.325 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:04.325 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:06:04.325 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:06:04.325 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:06:04.325 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:06:04.325 Starting DPDK initialization... 00:06:04.325 Starting SPDK post initialization... 00:06:04.325 SPDK NVMe probe 00:06:04.325 Attaching to 0000:00:10.0 00:06:04.325 Attaching to 0000:00:11.0 00:06:04.325 Attaching to 0000:00:12.0 00:06:04.325 Attaching to 0000:00:13.0 00:06:04.325 Attached to 0000:00:10.0 00:06:04.325 Attached to 0000:00:11.0 00:06:04.325 Attached to 0000:00:13.0 00:06:04.325 Attached to 0000:00:12.0 00:06:04.325 Cleaning up... 
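The spdk_nvme probes above only succeed because the controllers were rebound away from the kernel nvme driver earlier (the "nvme -> uio_pci_generic" lines). scripts/setup.sh performs that rebinding; a stripped-down manual version for one controller, using the standard sysfs driver_override interface with one of the BDFs from the log, looks roughly like this:
# Assumes uio_pci_generic is already loaded (modprobe uio_pci_generic); run as root.
bdf=0000:00:10.0
echo "$bdf" > /sys/bus/pci/drivers/nvme/unbind                  # detach the kernel driver
echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
echo "$bdf" > /sys/bus/pci/drivers_probe                        # rebind to uio_pci_generic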
00:06:04.325 00:06:04.325 real 0m0.275s 00:06:04.325 user 0m0.093s 00:06:04.325 sys 0m0.083s 00:06:04.325 02:52:50 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:04.325 ************************************ 00:06:04.325 02:52:50 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:04.325 END TEST env_dpdk_post_init 00:06:04.325 ************************************ 00:06:04.325 02:52:50 env -- env/env.sh@26 -- # uname 00:06:04.325 02:52:50 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:04.325 02:52:50 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:04.325 02:52:50 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:04.325 02:52:50 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:04.325 02:52:50 env -- common/autotest_common.sh@10 -- # set +x 00:06:04.325 ************************************ 00:06:04.325 START TEST env_mem_callbacks 00:06:04.325 ************************************ 00:06:04.325 02:52:50 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:04.325 EAL: Detected CPU lcores: 10 00:06:04.325 EAL: Detected NUMA nodes: 1 00:06:04.325 EAL: Detected shared linkage of DPDK 00:06:04.325 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:04.325 EAL: Selected IOVA mode 'PA' 00:06:04.584 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:04.584 00:06:04.585 00:06:04.585 CUnit - A unit testing framework for C - Version 2.1-3 00:06:04.585 http://cunit.sourceforge.net/ 00:06:04.585 00:06:04.585 00:06:04.585 Suite: memory 00:06:04.585 Test: test ... 00:06:04.585 register 0x200000200000 2097152 00:06:04.585 malloc 3145728 00:06:04.585 register 0x200000400000 4194304 00:06:04.585 buf 0x200000500000 len 3145728 PASSED 00:06:04.585 malloc 64 00:06:04.585 buf 0x2000004fff40 len 64 PASSED 00:06:04.585 malloc 4194304 00:06:04.585 register 0x200000800000 6291456 00:06:04.585 buf 0x200000a00000 len 4194304 PASSED 00:06:04.585 free 0x200000500000 3145728 00:06:04.585 free 0x2000004fff40 64 00:06:04.585 unregister 0x200000400000 4194304 PASSED 00:06:04.585 free 0x200000a00000 4194304 00:06:04.585 unregister 0x200000800000 6291456 PASSED 00:06:04.585 malloc 8388608 00:06:04.585 register 0x200000400000 10485760 00:06:04.585 buf 0x200000600000 len 8388608 PASSED 00:06:04.585 free 0x200000600000 8388608 00:06:04.585 unregister 0x200000400000 10485760 PASSED 00:06:04.585 passed 00:06:04.585 00:06:04.585 Run Summary: Type Total Ran Passed Failed Inactive 00:06:04.585 suites 1 1 n/a 0 0 00:06:04.585 tests 1 1 1 0 0 00:06:04.585 asserts 15 15 15 0 n/a 00:06:04.585 00:06:04.585 Elapsed time = 0.010 seconds 00:06:04.585 00:06:04.585 real 0m0.199s 00:06:04.585 user 0m0.046s 00:06:04.585 sys 0m0.051s 00:06:04.585 02:52:50 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:04.585 ************************************ 00:06:04.585 END TEST env_mem_callbacks 00:06:04.585 02:52:50 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:04.585 ************************************ 00:06:04.585 00:06:04.585 real 0m2.688s 00:06:04.585 user 0m1.295s 00:06:04.585 sys 0m1.027s 00:06:04.585 02:52:50 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:04.585 02:52:50 env -- common/autotest_common.sh@10 -- # set +x 00:06:04.585 ************************************ 00:06:04.585 END TEST env 00:06:04.585 
************************************ 00:06:04.585 02:52:50 -- spdk/autotest.sh@165 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:04.585 02:52:50 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:04.585 02:52:50 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:04.585 02:52:50 -- common/autotest_common.sh@10 -- # set +x 00:06:04.585 ************************************ 00:06:04.585 START TEST rpc 00:06:04.585 ************************************ 00:06:04.585 02:52:50 rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:04.585 * Looking for test storage... 00:06:04.844 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:04.844 02:52:50 rpc -- rpc/rpc.sh@65 -- # spdk_pid=75077 00:06:04.844 02:52:50 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:04.844 02:52:50 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:04.844 02:52:50 rpc -- rpc/rpc.sh@67 -- # waitforlisten 75077 00:06:04.844 02:52:50 rpc -- common/autotest_common.sh@827 -- # '[' -z 75077 ']' 00:06:04.845 02:52:50 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.845 02:52:50 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:04.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.845 02:52:50 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.845 02:52:50 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:04.845 02:52:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.845 [2024-05-14 02:52:50.785782] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:06:04.845 [2024-05-14 02:52:50.786006] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75077 ] 00:06:05.104 [2024-05-14 02:52:50.940448] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:05.104 [2024-05-14 02:52:50.961841] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.104 [2024-05-14 02:52:51.003849] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:05.104 [2024-05-14 02:52:51.003925] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 75077' to capture a snapshot of events at runtime. 00:06:05.104 [2024-05-14 02:52:51.003950] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:05.104 [2024-05-14 02:52:51.003965] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:05.104 [2024-05-14 02:52:51.003987] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid75077 for offline analysis/debug. 
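The rpc_integrity test that follows drives the freshly started spdk_tgt over its default /var/tmp/spdk.sock socket. The same sequence can be issued by hand with scripts/rpc.py; this is a condensed sketch of the calls seen below, not the rpc.sh test itself:
# From the SPDK repo root, once spdk_tgt has reported its reactor is running.
./build/bin/spdk_tgt -e bdev &                        # same binary and -e bdev flag as above
./scripts/rpc.py bdev_malloc_create 8 512             # -> Malloc0 (16384 x 512-byte blocks)
./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
./scripts/rpc.py bdev_get_bdevs | jq length           # expect 2: Malloc0 and Passthru0
./scripts/rpc.py bdev_passthru_delete Passthru0
./scripts/rpc.py bdev_malloc_delete Malloc0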
00:06:05.104 [2024-05-14 02:52:51.004025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.671 02:52:51 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:05.671 02:52:51 rpc -- common/autotest_common.sh@860 -- # return 0 00:06:05.671 02:52:51 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:05.671 02:52:51 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:05.671 02:52:51 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:05.671 02:52:51 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:05.671 02:52:51 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:05.671 02:52:51 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:05.671 02:52:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.931 ************************************ 00:06:05.931 START TEST rpc_integrity 00:06:05.931 ************************************ 00:06:05.931 02:52:51 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:06:05.931 02:52:51 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:05.931 02:52:51 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.931 02:52:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.931 02:52:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.931 02:52:51 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:05.931 02:52:51 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:05.931 02:52:51 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:05.931 02:52:51 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:05.931 02:52:51 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.931 02:52:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.931 02:52:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.931 02:52:51 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:05.931 02:52:51 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:05.931 02:52:51 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.931 02:52:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.931 02:52:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.931 02:52:51 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:05.931 { 00:06:05.931 "name": "Malloc0", 00:06:05.931 "aliases": [ 00:06:05.931 "5a60f7bb-bbae-4ba8-bf57-37ddb07826b0" 00:06:05.931 ], 00:06:05.931 "product_name": "Malloc disk", 00:06:05.931 "block_size": 512, 00:06:05.931 "num_blocks": 16384, 00:06:05.931 "uuid": "5a60f7bb-bbae-4ba8-bf57-37ddb07826b0", 00:06:05.931 "assigned_rate_limits": { 00:06:05.931 "rw_ios_per_sec": 0, 00:06:05.931 "rw_mbytes_per_sec": 0, 00:06:05.931 "r_mbytes_per_sec": 0, 00:06:05.931 "w_mbytes_per_sec": 0 00:06:05.931 }, 00:06:05.931 "claimed": false, 00:06:05.931 "zoned": false, 00:06:05.931 "supported_io_types": { 00:06:05.931 "read": true, 00:06:05.931 "write": true, 00:06:05.931 "unmap": true, 00:06:05.931 "write_zeroes": 
true, 00:06:05.931 "flush": true, 00:06:05.931 "reset": true, 00:06:05.931 "compare": false, 00:06:05.931 "compare_and_write": false, 00:06:05.931 "abort": true, 00:06:05.931 "nvme_admin": false, 00:06:05.931 "nvme_io": false 00:06:05.931 }, 00:06:05.931 "memory_domains": [ 00:06:05.931 { 00:06:05.931 "dma_device_id": "system", 00:06:05.931 "dma_device_type": 1 00:06:05.931 }, 00:06:05.931 { 00:06:05.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:05.931 "dma_device_type": 2 00:06:05.931 } 00:06:05.931 ], 00:06:05.931 "driver_specific": {} 00:06:05.931 } 00:06:05.931 ]' 00:06:05.931 02:52:51 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:05.931 02:52:51 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:05.931 02:52:51 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:05.931 02:52:51 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.931 02:52:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.931 [2024-05-14 02:52:51.849577] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:05.931 [2024-05-14 02:52:51.849685] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:05.931 [2024-05-14 02:52:51.849735] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:06:05.931 [2024-05-14 02:52:51.849753] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:05.931 [2024-05-14 02:52:51.852540] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:05.931 [2024-05-14 02:52:51.852597] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:05.931 Passthru0 00:06:05.931 02:52:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.931 02:52:51 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:05.931 02:52:51 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.931 02:52:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.931 02:52:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.931 02:52:51 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:05.931 { 00:06:05.931 "name": "Malloc0", 00:06:05.931 "aliases": [ 00:06:05.931 "5a60f7bb-bbae-4ba8-bf57-37ddb07826b0" 00:06:05.931 ], 00:06:05.931 "product_name": "Malloc disk", 00:06:05.931 "block_size": 512, 00:06:05.931 "num_blocks": 16384, 00:06:05.931 "uuid": "5a60f7bb-bbae-4ba8-bf57-37ddb07826b0", 00:06:05.931 "assigned_rate_limits": { 00:06:05.931 "rw_ios_per_sec": 0, 00:06:05.931 "rw_mbytes_per_sec": 0, 00:06:05.931 "r_mbytes_per_sec": 0, 00:06:05.931 "w_mbytes_per_sec": 0 00:06:05.931 }, 00:06:05.931 "claimed": true, 00:06:05.931 "claim_type": "exclusive_write", 00:06:05.931 "zoned": false, 00:06:05.931 "supported_io_types": { 00:06:05.931 "read": true, 00:06:05.931 "write": true, 00:06:05.931 "unmap": true, 00:06:05.931 "write_zeroes": true, 00:06:05.931 "flush": true, 00:06:05.931 "reset": true, 00:06:05.931 "compare": false, 00:06:05.931 "compare_and_write": false, 00:06:05.931 "abort": true, 00:06:05.931 "nvme_admin": false, 00:06:05.931 "nvme_io": false 00:06:05.931 }, 00:06:05.931 "memory_domains": [ 00:06:05.931 { 00:06:05.931 "dma_device_id": "system", 00:06:05.931 "dma_device_type": 1 00:06:05.931 }, 00:06:05.931 { 00:06:05.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:05.931 "dma_device_type": 2 00:06:05.931 } 
00:06:05.931 ], 00:06:05.931 "driver_specific": {} 00:06:05.931 }, 00:06:05.931 { 00:06:05.931 "name": "Passthru0", 00:06:05.931 "aliases": [ 00:06:05.931 "45d71ce5-1646-5061-a8d6-351c84b03f8a" 00:06:05.931 ], 00:06:05.931 "product_name": "passthru", 00:06:05.931 "block_size": 512, 00:06:05.931 "num_blocks": 16384, 00:06:05.931 "uuid": "45d71ce5-1646-5061-a8d6-351c84b03f8a", 00:06:05.931 "assigned_rate_limits": { 00:06:05.931 "rw_ios_per_sec": 0, 00:06:05.931 "rw_mbytes_per_sec": 0, 00:06:05.931 "r_mbytes_per_sec": 0, 00:06:05.931 "w_mbytes_per_sec": 0 00:06:05.931 }, 00:06:05.931 "claimed": false, 00:06:05.931 "zoned": false, 00:06:05.931 "supported_io_types": { 00:06:05.931 "read": true, 00:06:05.931 "write": true, 00:06:05.931 "unmap": true, 00:06:05.931 "write_zeroes": true, 00:06:05.931 "flush": true, 00:06:05.931 "reset": true, 00:06:05.931 "compare": false, 00:06:05.931 "compare_and_write": false, 00:06:05.931 "abort": true, 00:06:05.931 "nvme_admin": false, 00:06:05.931 "nvme_io": false 00:06:05.931 }, 00:06:05.931 "memory_domains": [ 00:06:05.931 { 00:06:05.931 "dma_device_id": "system", 00:06:05.931 "dma_device_type": 1 00:06:05.932 }, 00:06:05.932 { 00:06:05.932 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:05.932 "dma_device_type": 2 00:06:05.932 } 00:06:05.932 ], 00:06:05.932 "driver_specific": { 00:06:05.932 "passthru": { 00:06:05.932 "name": "Passthru0", 00:06:05.932 "base_bdev_name": "Malloc0" 00:06:05.932 } 00:06:05.932 } 00:06:05.932 } 00:06:05.932 ]' 00:06:05.932 02:52:51 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:05.932 02:52:51 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:05.932 02:52:51 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:05.932 02:52:51 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.932 02:52:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.932 02:52:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.932 02:52:51 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:05.932 02:52:51 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.932 02:52:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.932 02:52:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.932 02:52:51 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:05.932 02:52:51 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.932 02:52:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.932 02:52:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.932 02:52:51 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:05.932 02:52:51 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:06.190 02:52:52 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:06.190 00:06:06.190 real 0m0.306s 00:06:06.190 user 0m0.205s 00:06:06.190 sys 0m0.035s 00:06:06.190 02:52:52 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:06.190 02:52:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.190 ************************************ 00:06:06.190 END TEST rpc_integrity 00:06:06.190 ************************************ 00:06:06.190 02:52:52 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:06.190 02:52:52 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:06.190 02:52:52 rpc -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:06:06.190 02:52:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.190 ************************************ 00:06:06.190 START TEST rpc_plugins 00:06:06.190 ************************************ 00:06:06.190 02:52:52 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:06:06.190 02:52:52 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:06.190 02:52:52 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.190 02:52:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:06.190 02:52:52 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.190 02:52:52 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:06.190 02:52:52 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:06.190 02:52:52 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.191 02:52:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:06.191 02:52:52 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.191 02:52:52 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:06.191 { 00:06:06.191 "name": "Malloc1", 00:06:06.191 "aliases": [ 00:06:06.191 "3ec3e84b-bf51-4b01-add9-514d00cf0264" 00:06:06.191 ], 00:06:06.191 "product_name": "Malloc disk", 00:06:06.191 "block_size": 4096, 00:06:06.191 "num_blocks": 256, 00:06:06.191 "uuid": "3ec3e84b-bf51-4b01-add9-514d00cf0264", 00:06:06.191 "assigned_rate_limits": { 00:06:06.191 "rw_ios_per_sec": 0, 00:06:06.191 "rw_mbytes_per_sec": 0, 00:06:06.191 "r_mbytes_per_sec": 0, 00:06:06.191 "w_mbytes_per_sec": 0 00:06:06.191 }, 00:06:06.191 "claimed": false, 00:06:06.191 "zoned": false, 00:06:06.191 "supported_io_types": { 00:06:06.191 "read": true, 00:06:06.191 "write": true, 00:06:06.191 "unmap": true, 00:06:06.191 "write_zeroes": true, 00:06:06.191 "flush": true, 00:06:06.191 "reset": true, 00:06:06.191 "compare": false, 00:06:06.191 "compare_and_write": false, 00:06:06.191 "abort": true, 00:06:06.191 "nvme_admin": false, 00:06:06.191 "nvme_io": false 00:06:06.191 }, 00:06:06.191 "memory_domains": [ 00:06:06.191 { 00:06:06.191 "dma_device_id": "system", 00:06:06.191 "dma_device_type": 1 00:06:06.191 }, 00:06:06.191 { 00:06:06.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:06.191 "dma_device_type": 2 00:06:06.191 } 00:06:06.191 ], 00:06:06.191 "driver_specific": {} 00:06:06.191 } 00:06:06.191 ]' 00:06:06.191 02:52:52 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:06.191 02:52:52 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:06.191 02:52:52 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:06.191 02:52:52 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.191 02:52:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:06.191 02:52:52 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.191 02:52:52 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:06.191 02:52:52 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.191 02:52:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:06.191 02:52:52 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.191 02:52:52 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:06.191 02:52:52 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:06.191 02:52:52 rpc.rpc_plugins -- rpc/rpc.sh@36 
-- # '[' 0 == 0 ']' 00:06:06.191 00:06:06.191 real 0m0.150s 00:06:06.191 user 0m0.096s 00:06:06.191 sys 0m0.019s 00:06:06.191 02:52:52 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:06.191 ************************************ 00:06:06.191 END TEST rpc_plugins 00:06:06.191 ************************************ 00:06:06.191 02:52:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:06.450 02:52:52 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:06.450 02:52:52 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:06.450 02:52:52 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:06.450 02:52:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.450 ************************************ 00:06:06.450 START TEST rpc_trace_cmd_test 00:06:06.450 ************************************ 00:06:06.450 02:52:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:06:06.450 02:52:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:06.450 02:52:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:06.450 02:52:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.450 02:52:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:06.450 02:52:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.450 02:52:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:06.450 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid75077", 00:06:06.450 "tpoint_group_mask": "0x8", 00:06:06.450 "iscsi_conn": { 00:06:06.450 "mask": "0x2", 00:06:06.450 "tpoint_mask": "0x0" 00:06:06.450 }, 00:06:06.450 "scsi": { 00:06:06.450 "mask": "0x4", 00:06:06.450 "tpoint_mask": "0x0" 00:06:06.450 }, 00:06:06.450 "bdev": { 00:06:06.450 "mask": "0x8", 00:06:06.450 "tpoint_mask": "0xffffffffffffffff" 00:06:06.450 }, 00:06:06.450 "nvmf_rdma": { 00:06:06.450 "mask": "0x10", 00:06:06.450 "tpoint_mask": "0x0" 00:06:06.450 }, 00:06:06.450 "nvmf_tcp": { 00:06:06.450 "mask": "0x20", 00:06:06.450 "tpoint_mask": "0x0" 00:06:06.450 }, 00:06:06.450 "ftl": { 00:06:06.450 "mask": "0x40", 00:06:06.450 "tpoint_mask": "0x0" 00:06:06.450 }, 00:06:06.450 "blobfs": { 00:06:06.450 "mask": "0x80", 00:06:06.450 "tpoint_mask": "0x0" 00:06:06.450 }, 00:06:06.450 "dsa": { 00:06:06.450 "mask": "0x200", 00:06:06.450 "tpoint_mask": "0x0" 00:06:06.450 }, 00:06:06.450 "thread": { 00:06:06.450 "mask": "0x400", 00:06:06.450 "tpoint_mask": "0x0" 00:06:06.450 }, 00:06:06.450 "nvme_pcie": { 00:06:06.450 "mask": "0x800", 00:06:06.450 "tpoint_mask": "0x0" 00:06:06.450 }, 00:06:06.450 "iaa": { 00:06:06.450 "mask": "0x1000", 00:06:06.450 "tpoint_mask": "0x0" 00:06:06.450 }, 00:06:06.450 "nvme_tcp": { 00:06:06.450 "mask": "0x2000", 00:06:06.450 "tpoint_mask": "0x0" 00:06:06.450 }, 00:06:06.450 "bdev_nvme": { 00:06:06.450 "mask": "0x4000", 00:06:06.450 "tpoint_mask": "0x0" 00:06:06.450 }, 00:06:06.450 "sock": { 00:06:06.450 "mask": "0x8000", 00:06:06.450 "tpoint_mask": "0x0" 00:06:06.450 } 00:06:06.450 }' 00:06:06.450 02:52:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:06.450 02:52:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:06:06.450 02:52:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:06.450 02:52:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:06.450 02:52:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 
'has("tpoint_shm_path")' 00:06:06.450 02:52:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:06.450 02:52:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:06.450 02:52:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:06.710 02:52:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:06.710 02:52:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:06.710 00:06:06.710 real 0m0.268s 00:06:06.710 user 0m0.233s 00:06:06.710 sys 0m0.027s 00:06:06.710 02:52:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:06.710 ************************************ 00:06:06.710 END TEST rpc_trace_cmd_test 00:06:06.710 ************************************ 00:06:06.710 02:52:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:06.710 02:52:52 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:06.710 02:52:52 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:06.710 02:52:52 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:06.710 02:52:52 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:06.710 02:52:52 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:06.710 02:52:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.710 ************************************ 00:06:06.710 START TEST rpc_daemon_integrity 00:06:06.710 ************************************ 00:06:06.710 02:52:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:06:06.710 02:52:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:06.710 02:52:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.710 02:52:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.710 02:52:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.710 02:52:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:06.710 02:52:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:06.710 02:52:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:06.710 02:52:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:06.710 02:52:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.710 02:52:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.710 02:52:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.710 02:52:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:06.710 02:52:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:06.710 02:52:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.710 02:52:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.710 02:52:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.710 02:52:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:06.710 { 00:06:06.710 "name": "Malloc2", 00:06:06.710 "aliases": [ 00:06:06.710 "eaef16a3-0729-46e4-8c7c-e3c1eee42877" 00:06:06.710 ], 00:06:06.710 "product_name": "Malloc disk", 00:06:06.710 "block_size": 512, 00:06:06.710 "num_blocks": 16384, 00:06:06.710 "uuid": "eaef16a3-0729-46e4-8c7c-e3c1eee42877", 00:06:06.710 "assigned_rate_limits": { 00:06:06.710 "rw_ios_per_sec": 0, 00:06:06.710 
"rw_mbytes_per_sec": 0, 00:06:06.710 "r_mbytes_per_sec": 0, 00:06:06.710 "w_mbytes_per_sec": 0 00:06:06.710 }, 00:06:06.710 "claimed": false, 00:06:06.710 "zoned": false, 00:06:06.710 "supported_io_types": { 00:06:06.710 "read": true, 00:06:06.710 "write": true, 00:06:06.710 "unmap": true, 00:06:06.710 "write_zeroes": true, 00:06:06.710 "flush": true, 00:06:06.710 "reset": true, 00:06:06.710 "compare": false, 00:06:06.710 "compare_and_write": false, 00:06:06.710 "abort": true, 00:06:06.710 "nvme_admin": false, 00:06:06.710 "nvme_io": false 00:06:06.710 }, 00:06:06.710 "memory_domains": [ 00:06:06.710 { 00:06:06.710 "dma_device_id": "system", 00:06:06.710 "dma_device_type": 1 00:06:06.710 }, 00:06:06.710 { 00:06:06.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:06.710 "dma_device_type": 2 00:06:06.710 } 00:06:06.710 ], 00:06:06.710 "driver_specific": {} 00:06:06.710 } 00:06:06.710 ]' 00:06:06.710 02:52:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:06.710 02:52:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:06.710 02:52:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:06.710 02:52:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.710 02:52:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.710 [2024-05-14 02:52:52.714109] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:06.710 [2024-05-14 02:52:52.714205] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:06.710 [2024-05-14 02:52:52.714237] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:06:06.710 [2024-05-14 02:52:52.714252] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:06.710 [2024-05-14 02:52:52.717000] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:06.710 [2024-05-14 02:52:52.717056] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:06.710 Passthru0 00:06:06.710 02:52:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.710 02:52:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:06.710 02:52:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.710 02:52:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.970 02:52:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.970 02:52:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:06.970 { 00:06:06.970 "name": "Malloc2", 00:06:06.970 "aliases": [ 00:06:06.970 "eaef16a3-0729-46e4-8c7c-e3c1eee42877" 00:06:06.970 ], 00:06:06.970 "product_name": "Malloc disk", 00:06:06.970 "block_size": 512, 00:06:06.970 "num_blocks": 16384, 00:06:06.970 "uuid": "eaef16a3-0729-46e4-8c7c-e3c1eee42877", 00:06:06.970 "assigned_rate_limits": { 00:06:06.970 "rw_ios_per_sec": 0, 00:06:06.970 "rw_mbytes_per_sec": 0, 00:06:06.970 "r_mbytes_per_sec": 0, 00:06:06.970 "w_mbytes_per_sec": 0 00:06:06.970 }, 00:06:06.970 "claimed": true, 00:06:06.970 "claim_type": "exclusive_write", 00:06:06.970 "zoned": false, 00:06:06.970 "supported_io_types": { 00:06:06.970 "read": true, 00:06:06.970 "write": true, 00:06:06.970 "unmap": true, 00:06:06.970 "write_zeroes": true, 00:06:06.970 "flush": true, 00:06:06.970 "reset": true, 00:06:06.970 "compare": false, 
00:06:06.970 "compare_and_write": false, 00:06:06.970 "abort": true, 00:06:06.970 "nvme_admin": false, 00:06:06.970 "nvme_io": false 00:06:06.970 }, 00:06:06.970 "memory_domains": [ 00:06:06.970 { 00:06:06.970 "dma_device_id": "system", 00:06:06.970 "dma_device_type": 1 00:06:06.970 }, 00:06:06.970 { 00:06:06.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:06.970 "dma_device_type": 2 00:06:06.970 } 00:06:06.970 ], 00:06:06.970 "driver_specific": {} 00:06:06.970 }, 00:06:06.970 { 00:06:06.970 "name": "Passthru0", 00:06:06.970 "aliases": [ 00:06:06.970 "18432e64-3883-5584-9635-8f1b4be67efd" 00:06:06.970 ], 00:06:06.970 "product_name": "passthru", 00:06:06.970 "block_size": 512, 00:06:06.970 "num_blocks": 16384, 00:06:06.970 "uuid": "18432e64-3883-5584-9635-8f1b4be67efd", 00:06:06.970 "assigned_rate_limits": { 00:06:06.970 "rw_ios_per_sec": 0, 00:06:06.970 "rw_mbytes_per_sec": 0, 00:06:06.970 "r_mbytes_per_sec": 0, 00:06:06.970 "w_mbytes_per_sec": 0 00:06:06.970 }, 00:06:06.970 "claimed": false, 00:06:06.970 "zoned": false, 00:06:06.970 "supported_io_types": { 00:06:06.970 "read": true, 00:06:06.970 "write": true, 00:06:06.970 "unmap": true, 00:06:06.970 "write_zeroes": true, 00:06:06.970 "flush": true, 00:06:06.970 "reset": true, 00:06:06.970 "compare": false, 00:06:06.970 "compare_and_write": false, 00:06:06.970 "abort": true, 00:06:06.970 "nvme_admin": false, 00:06:06.970 "nvme_io": false 00:06:06.970 }, 00:06:06.970 "memory_domains": [ 00:06:06.970 { 00:06:06.970 "dma_device_id": "system", 00:06:06.970 "dma_device_type": 1 00:06:06.970 }, 00:06:06.970 { 00:06:06.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:06.970 "dma_device_type": 2 00:06:06.970 } 00:06:06.970 ], 00:06:06.970 "driver_specific": { 00:06:06.970 "passthru": { 00:06:06.970 "name": "Passthru0", 00:06:06.970 "base_bdev_name": "Malloc2" 00:06:06.970 } 00:06:06.970 } 00:06:06.970 } 00:06:06.970 ]' 00:06:06.970 02:52:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:06.970 02:52:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:06.970 02:52:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:06.970 02:52:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.970 02:52:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.970 02:52:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.970 02:52:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:06.970 02:52:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.970 02:52:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.970 02:52:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.970 02:52:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:06.970 02:52:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.970 02:52:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.970 02:52:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.970 02:52:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:06.970 02:52:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:06.970 02:52:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:06.970 00:06:06.970 real 0m0.285s 00:06:06.970 user 0m0.191s 
00:06:06.970 sys 0m0.031s 00:06:06.970 02:52:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:06.970 ************************************ 00:06:06.970 END TEST rpc_daemon_integrity 00:06:06.970 02:52:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.970 ************************************ 00:06:06.970 02:52:52 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:06.970 02:52:52 rpc -- rpc/rpc.sh@84 -- # killprocess 75077 00:06:06.970 02:52:52 rpc -- common/autotest_common.sh@946 -- # '[' -z 75077 ']' 00:06:06.970 02:52:52 rpc -- common/autotest_common.sh@950 -- # kill -0 75077 00:06:06.970 02:52:52 rpc -- common/autotest_common.sh@951 -- # uname 00:06:06.970 02:52:52 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:06.970 02:52:52 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75077 00:06:06.970 02:52:52 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:06.970 02:52:52 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:06.970 killing process with pid 75077 00:06:06.971 02:52:52 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75077' 00:06:06.971 02:52:52 rpc -- common/autotest_common.sh@965 -- # kill 75077 00:06:06.971 02:52:52 rpc -- common/autotest_common.sh@970 -- # wait 75077 00:06:07.230 00:06:07.230 real 0m2.662s 00:06:07.230 user 0m3.481s 00:06:07.230 sys 0m0.662s 00:06:07.230 02:52:53 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:07.230 ************************************ 00:06:07.230 02:52:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.230 END TEST rpc 00:06:07.230 ************************************ 00:06:07.230 02:52:53 -- spdk/autotest.sh@166 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:07.230 02:52:53 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:07.230 02:52:53 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:07.230 02:52:53 -- common/autotest_common.sh@10 -- # set +x 00:06:07.230 ************************************ 00:06:07.230 START TEST skip_rpc 00:06:07.230 ************************************ 00:06:07.230 02:52:53 skip_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:07.489 * Looking for test storage... 
00:06:07.489 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:07.489 02:52:53 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:07.489 02:52:53 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:07.489 02:52:53 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:07.489 02:52:53 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:07.489 02:52:53 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:07.489 02:52:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.489 ************************************ 00:06:07.489 START TEST skip_rpc 00:06:07.489 ************************************ 00:06:07.489 02:52:53 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:06:07.489 02:52:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=75269 00:06:07.489 02:52:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:07.489 02:52:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:07.489 02:52:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:07.489 [2024-05-14 02:52:53.426821] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:06:07.489 [2024-05-14 02:52:53.426983] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75269 ] 00:06:07.747 [2024-05-14 02:52:53.562047] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
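For context on what this skip_rpc case exercises: the target is started with --no-rpc-server, so /var/tmp/spdk.sock is never created and the spdk_get_version call that follows must fail. A minimal bash sketch of the same flow, using scripts/rpc.py directly instead of the suite's rpc_cmd wrapper; the paths and the fixed 5-second sleep mirror the log above, the echo/kill handling is illustrative only.

SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

"$SPDK_BIN" --no-rpc-server -m 0x1 &    # target runs, but no RPC socket is created
tgt_pid=$!
sleep 5                                 # the test sleeps instead of waiting on the socket

if "$RPC_PY" spdk_get_version; then     # expected to fail: nothing listens on /var/tmp/spdk.sock
    echo "unexpected: spdk_get_version succeeded" >&2
fi
kill "$tgt_pid"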
00:06:07.747 [2024-05-14 02:52:53.583063] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.747 [2024-05-14 02:52:53.620008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.015 02:52:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:13.015 02:52:58 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:13.015 02:52:58 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:13.015 02:52:58 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:13.015 02:52:58 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:13.015 02:52:58 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:13.015 02:52:58 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:13.015 02:52:58 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:06:13.015 02:52:58 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.015 02:52:58 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.015 02:52:58 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:13.015 02:52:58 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:13.015 02:52:58 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:13.015 02:52:58 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:13.015 02:52:58 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:13.015 02:52:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:13.015 02:52:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 75269 00:06:13.015 02:52:58 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 75269 ']' 00:06:13.015 02:52:58 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 75269 00:06:13.015 02:52:58 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:06:13.015 02:52:58 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:13.015 02:52:58 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75269 00:06:13.015 02:52:58 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:13.015 killing process with pid 75269 00:06:13.015 02:52:58 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:13.015 02:52:58 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75269' 00:06:13.015 02:52:58 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 75269 00:06:13.015 02:52:58 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 75269 00:06:13.015 00:06:13.015 real 0m5.313s 00:06:13.015 user 0m4.986s 00:06:13.015 sys 0m0.230s 00:06:13.015 02:52:58 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:13.015 02:52:58 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.015 ************************************ 00:06:13.015 END TEST skip_rpc 00:06:13.015 ************************************ 00:06:13.015 02:52:58 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:13.015 02:52:58 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:13.015 02:52:58 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:13.015 02:52:58 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.015 ************************************ 00:06:13.015 START TEST skip_rpc_with_json 00:06:13.015 ************************************ 00:06:13.015 02:52:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:06:13.015 02:52:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:13.015 02:52:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=75357 00:06:13.015 02:52:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:13.015 02:52:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 75357 00:06:13.015 02:52:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:13.015 02:52:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 75357 ']' 00:06:13.015 02:52:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.015 02:52:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:13.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.015 02:52:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.015 02:52:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:13.015 02:52:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:13.015 [2024-05-14 02:52:58.814300] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:06:13.015 [2024-05-14 02:52:58.814483] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75357 ] 00:06:13.015 [2024-05-14 02:52:58.962032] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
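A note on the exchange that follows once this target is up: the test first issues nvmf_get_transports --trtype tcp (expected to fail with "No such device" while no transport exists), then nvmf_create_transport -t tcp, then save_config to capture the running configuration as JSON. A hedged scripts/rpc.py rendering of that sequence; only the config path is taken from the test, the rest is illustrative.

RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json

"$RPC_PY" nvmf_get_transports --trtype tcp || true   # fails while no TCP transport exists
"$RPC_PY" nvmf_create_transport -t tcp                # creates the TCP transport
"$RPC_PY" save_config > "$CONFIG_PATH"                # dump the live configuration as JSON
# A later run of spdk_tgt --json "$CONFIG_PATH" replays this configuration at startup;
# the test then greps its log for 'TCP Transport Init' to prove the transport was restored.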
00:06:13.015 [2024-05-14 02:52:58.977890] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.015 [2024-05-14 02:52:59.012365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.951 02:52:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:13.951 02:52:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:06:13.951 02:52:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:13.951 02:52:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.951 02:52:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:13.951 [2024-05-14 02:52:59.742576] nvmf_rpc.c:2531:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:13.951 request: 00:06:13.951 { 00:06:13.951 "trtype": "tcp", 00:06:13.951 "method": "nvmf_get_transports", 00:06:13.951 "req_id": 1 00:06:13.951 } 00:06:13.951 Got JSON-RPC error response 00:06:13.951 response: 00:06:13.951 { 00:06:13.951 "code": -19, 00:06:13.951 "message": "No such device" 00:06:13.951 } 00:06:13.951 02:52:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:13.951 02:52:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:13.951 02:52:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.951 02:52:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:13.951 [2024-05-14 02:52:59.754736] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:13.951 02:52:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.951 02:52:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:13.951 02:52:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.951 02:52:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:13.951 02:52:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.951 02:52:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:13.951 { 00:06:13.951 "subsystems": [ 00:06:13.951 { 00:06:13.951 "subsystem": "keyring", 00:06:13.951 "config": [] 00:06:13.951 }, 00:06:13.951 { 00:06:13.951 "subsystem": "iobuf", 00:06:13.951 "config": [ 00:06:13.951 { 00:06:13.951 "method": "iobuf_set_options", 00:06:13.951 "params": { 00:06:13.951 "small_pool_count": 8192, 00:06:13.951 "large_pool_count": 1024, 00:06:13.951 "small_bufsize": 8192, 00:06:13.951 "large_bufsize": 135168 00:06:13.951 } 00:06:13.951 } 00:06:13.951 ] 00:06:13.951 }, 00:06:13.951 { 00:06:13.951 "subsystem": "sock", 00:06:13.951 "config": [ 00:06:13.951 { 00:06:13.951 "method": "sock_impl_set_options", 00:06:13.951 "params": { 00:06:13.951 "impl_name": "posix", 00:06:13.951 "recv_buf_size": 2097152, 00:06:13.951 "send_buf_size": 2097152, 00:06:13.951 "enable_recv_pipe": true, 00:06:13.951 "enable_quickack": false, 00:06:13.951 "enable_placement_id": 0, 00:06:13.951 "enable_zerocopy_send_server": true, 00:06:13.951 "enable_zerocopy_send_client": false, 00:06:13.951 "zerocopy_threshold": 0, 00:06:13.951 "tls_version": 0, 00:06:13.951 "enable_ktls": false 00:06:13.951 } 00:06:13.951 }, 00:06:13.951 { 00:06:13.951 "method": "sock_impl_set_options", 00:06:13.951 "params": { 00:06:13.951 
"impl_name": "ssl", 00:06:13.951 "recv_buf_size": 4096, 00:06:13.951 "send_buf_size": 4096, 00:06:13.951 "enable_recv_pipe": true, 00:06:13.951 "enable_quickack": false, 00:06:13.951 "enable_placement_id": 0, 00:06:13.951 "enable_zerocopy_send_server": true, 00:06:13.951 "enable_zerocopy_send_client": false, 00:06:13.951 "zerocopy_threshold": 0, 00:06:13.951 "tls_version": 0, 00:06:13.951 "enable_ktls": false 00:06:13.951 } 00:06:13.951 } 00:06:13.951 ] 00:06:13.951 }, 00:06:13.951 { 00:06:13.951 "subsystem": "vmd", 00:06:13.951 "config": [] 00:06:13.951 }, 00:06:13.951 { 00:06:13.951 "subsystem": "accel", 00:06:13.951 "config": [ 00:06:13.951 { 00:06:13.951 "method": "accel_set_options", 00:06:13.951 "params": { 00:06:13.951 "small_cache_size": 128, 00:06:13.951 "large_cache_size": 16, 00:06:13.951 "task_count": 2048, 00:06:13.951 "sequence_count": 2048, 00:06:13.951 "buf_count": 2048 00:06:13.951 } 00:06:13.951 } 00:06:13.951 ] 00:06:13.951 }, 00:06:13.951 { 00:06:13.951 "subsystem": "bdev", 00:06:13.951 "config": [ 00:06:13.951 { 00:06:13.951 "method": "bdev_set_options", 00:06:13.951 "params": { 00:06:13.951 "bdev_io_pool_size": 65535, 00:06:13.951 "bdev_io_cache_size": 256, 00:06:13.951 "bdev_auto_examine": true, 00:06:13.951 "iobuf_small_cache_size": 128, 00:06:13.951 "iobuf_large_cache_size": 16 00:06:13.951 } 00:06:13.951 }, 00:06:13.951 { 00:06:13.951 "method": "bdev_raid_set_options", 00:06:13.951 "params": { 00:06:13.951 "process_window_size_kb": 1024 00:06:13.951 } 00:06:13.951 }, 00:06:13.951 { 00:06:13.951 "method": "bdev_iscsi_set_options", 00:06:13.951 "params": { 00:06:13.951 "timeout_sec": 30 00:06:13.951 } 00:06:13.951 }, 00:06:13.951 { 00:06:13.951 "method": "bdev_nvme_set_options", 00:06:13.951 "params": { 00:06:13.951 "action_on_timeout": "none", 00:06:13.951 "timeout_us": 0, 00:06:13.951 "timeout_admin_us": 0, 00:06:13.951 "keep_alive_timeout_ms": 10000, 00:06:13.951 "arbitration_burst": 0, 00:06:13.951 "low_priority_weight": 0, 00:06:13.951 "medium_priority_weight": 0, 00:06:13.951 "high_priority_weight": 0, 00:06:13.951 "nvme_adminq_poll_period_us": 10000, 00:06:13.951 "nvme_ioq_poll_period_us": 0, 00:06:13.951 "io_queue_requests": 0, 00:06:13.951 "delay_cmd_submit": true, 00:06:13.951 "transport_retry_count": 4, 00:06:13.951 "bdev_retry_count": 3, 00:06:13.951 "transport_ack_timeout": 0, 00:06:13.951 "ctrlr_loss_timeout_sec": 0, 00:06:13.951 "reconnect_delay_sec": 0, 00:06:13.951 "fast_io_fail_timeout_sec": 0, 00:06:13.951 "disable_auto_failback": false, 00:06:13.951 "generate_uuids": false, 00:06:13.951 "transport_tos": 0, 00:06:13.951 "nvme_error_stat": false, 00:06:13.951 "rdma_srq_size": 0, 00:06:13.951 "io_path_stat": false, 00:06:13.951 "allow_accel_sequence": false, 00:06:13.951 "rdma_max_cq_size": 0, 00:06:13.951 "rdma_cm_event_timeout_ms": 0, 00:06:13.951 "dhchap_digests": [ 00:06:13.951 "sha256", 00:06:13.951 "sha384", 00:06:13.951 "sha512" 00:06:13.951 ], 00:06:13.951 "dhchap_dhgroups": [ 00:06:13.951 "null", 00:06:13.951 "ffdhe2048", 00:06:13.951 "ffdhe3072", 00:06:13.951 "ffdhe4096", 00:06:13.951 "ffdhe6144", 00:06:13.951 "ffdhe8192" 00:06:13.951 ] 00:06:13.951 } 00:06:13.951 }, 00:06:13.951 { 00:06:13.951 "method": "bdev_nvme_set_hotplug", 00:06:13.951 "params": { 00:06:13.951 "period_us": 100000, 00:06:13.951 "enable": false 00:06:13.951 } 00:06:13.951 }, 00:06:13.951 { 00:06:13.951 "method": "bdev_wait_for_examine" 00:06:13.951 } 00:06:13.951 ] 00:06:13.951 }, 00:06:13.951 { 00:06:13.951 "subsystem": "scsi", 00:06:13.951 "config": null 00:06:13.951 
}, 00:06:13.951 { 00:06:13.951 "subsystem": "scheduler", 00:06:13.951 "config": [ 00:06:13.951 { 00:06:13.951 "method": "framework_set_scheduler", 00:06:13.951 "params": { 00:06:13.951 "name": "static" 00:06:13.951 } 00:06:13.951 } 00:06:13.951 ] 00:06:13.951 }, 00:06:13.951 { 00:06:13.951 "subsystem": "vhost_scsi", 00:06:13.951 "config": [] 00:06:13.951 }, 00:06:13.951 { 00:06:13.951 "subsystem": "vhost_blk", 00:06:13.951 "config": [] 00:06:13.951 }, 00:06:13.951 { 00:06:13.951 "subsystem": "ublk", 00:06:13.951 "config": [] 00:06:13.951 }, 00:06:13.951 { 00:06:13.951 "subsystem": "nbd", 00:06:13.951 "config": [] 00:06:13.951 }, 00:06:13.952 { 00:06:13.952 "subsystem": "nvmf", 00:06:13.952 "config": [ 00:06:13.952 { 00:06:13.952 "method": "nvmf_set_config", 00:06:13.952 "params": { 00:06:13.952 "discovery_filter": "match_any", 00:06:13.952 "admin_cmd_passthru": { 00:06:13.952 "identify_ctrlr": false 00:06:13.952 } 00:06:13.952 } 00:06:13.952 }, 00:06:13.952 { 00:06:13.952 "method": "nvmf_set_max_subsystems", 00:06:13.952 "params": { 00:06:13.952 "max_subsystems": 1024 00:06:13.952 } 00:06:13.952 }, 00:06:13.952 { 00:06:13.952 "method": "nvmf_set_crdt", 00:06:13.952 "params": { 00:06:13.952 "crdt1": 0, 00:06:13.952 "crdt2": 0, 00:06:13.952 "crdt3": 0 00:06:13.952 } 00:06:13.952 }, 00:06:13.952 { 00:06:13.952 "method": "nvmf_create_transport", 00:06:13.952 "params": { 00:06:13.952 "trtype": "TCP", 00:06:13.952 "max_queue_depth": 128, 00:06:13.952 "max_io_qpairs_per_ctrlr": 127, 00:06:13.952 "in_capsule_data_size": 4096, 00:06:13.952 "max_io_size": 131072, 00:06:13.952 "io_unit_size": 131072, 00:06:13.952 "max_aq_depth": 128, 00:06:13.952 "num_shared_buffers": 511, 00:06:13.952 "buf_cache_size": 4294967295, 00:06:13.952 "dif_insert_or_strip": false, 00:06:13.952 "zcopy": false, 00:06:13.952 "c2h_success": true, 00:06:13.952 "sock_priority": 0, 00:06:13.952 "abort_timeout_sec": 1, 00:06:13.952 "ack_timeout": 0, 00:06:13.952 "data_wr_pool_size": 0 00:06:13.952 } 00:06:13.952 } 00:06:13.952 ] 00:06:13.952 }, 00:06:13.952 { 00:06:13.952 "subsystem": "iscsi", 00:06:13.952 "config": [ 00:06:13.952 { 00:06:13.952 "method": "iscsi_set_options", 00:06:13.952 "params": { 00:06:13.952 "node_base": "iqn.2016-06.io.spdk", 00:06:13.952 "max_sessions": 128, 00:06:13.952 "max_connections_per_session": 2, 00:06:13.952 "max_queue_depth": 64, 00:06:13.952 "default_time2wait": 2, 00:06:13.952 "default_time2retain": 20, 00:06:13.952 "first_burst_length": 8192, 00:06:13.952 "immediate_data": true, 00:06:13.952 "allow_duplicated_isid": false, 00:06:13.952 "error_recovery_level": 0, 00:06:13.952 "nop_timeout": 60, 00:06:13.952 "nop_in_interval": 30, 00:06:13.952 "disable_chap": false, 00:06:13.952 "require_chap": false, 00:06:13.952 "mutual_chap": false, 00:06:13.952 "chap_group": 0, 00:06:13.952 "max_large_datain_per_connection": 64, 00:06:13.952 "max_r2t_per_connection": 4, 00:06:13.952 "pdu_pool_size": 36864, 00:06:13.952 "immediate_data_pool_size": 16384, 00:06:13.952 "data_out_pool_size": 2048 00:06:13.952 } 00:06:13.952 } 00:06:13.952 ] 00:06:13.952 } 00:06:13.952 ] 00:06:13.952 } 00:06:13.952 02:52:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:13.952 02:52:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 75357 00:06:13.952 02:52:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 75357 ']' 00:06:13.952 02:52:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 75357 00:06:13.952 02:52:59 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:06:13.952 02:52:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:13.952 02:52:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75357 00:06:13.952 02:52:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:13.952 killing process with pid 75357 00:06:13.952 02:52:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:13.952 02:52:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75357' 00:06:13.952 02:52:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 75357 00:06:13.952 02:52:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 75357 00:06:14.211 02:53:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=75385 00:06:14.211 02:53:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:14.211 02:53:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:19.483 02:53:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 75385 00:06:19.483 02:53:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 75385 ']' 00:06:19.483 02:53:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 75385 00:06:19.483 02:53:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:06:19.483 02:53:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:19.483 02:53:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75385 00:06:19.483 02:53:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:19.483 02:53:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:19.483 killing process with pid 75385 00:06:19.483 02:53:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75385' 00:06:19.483 02:53:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 75385 00:06:19.483 02:53:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 75385 00:06:19.742 02:53:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:19.742 02:53:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:19.742 00:06:19.742 real 0m6.847s 00:06:19.742 user 0m6.618s 00:06:19.742 sys 0m0.610s 00:06:19.742 02:53:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:19.742 02:53:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:19.742 ************************************ 00:06:19.742 END TEST skip_rpc_with_json 00:06:19.742 ************************************ 00:06:19.742 02:53:05 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:19.742 02:53:05 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:19.742 02:53:05 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:19.742 02:53:05 skip_rpc -- common/autotest_common.sh@10 
-- # set +x 00:06:19.742 ************************************ 00:06:19.742 START TEST skip_rpc_with_delay 00:06:19.742 ************************************ 00:06:19.742 02:53:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:06:19.742 02:53:05 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:19.742 02:53:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:06:19.742 02:53:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:19.742 02:53:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:19.742 02:53:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:19.742 02:53:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:19.742 02:53:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:19.742 02:53:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:19.742 02:53:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:19.742 02:53:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:19.743 02:53:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:19.743 02:53:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:19.743 [2024-05-14 02:53:05.720414] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
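The error above is the crux of skip_rpc_with_delay: --wait-for-rpc tells the app to pause startup until a framework_start_init RPC arrives, so combining it with --no-rpc-server could never make progress, and spdk_tgt refuses to start. A hedged sketch of the rejected invocation; the flags are the ones used above, the exit-status reporting is illustrative.

SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

if "$SPDK_BIN" --no-rpc-server -m 0x1 --wait-for-rpc; then
    echo "unexpected: spdk_tgt accepted --wait-for-rpc with no RPC server" >&2
else
    echo "spdk_tgt rejected the flag combination, as the test expects"
fi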
00:06:19.743 [2024-05-14 02:53:05.720594] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:20.001 02:53:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:06:20.001 02:53:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:20.001 02:53:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:20.001 02:53:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:20.001 00:06:20.001 real 0m0.182s 00:06:20.001 user 0m0.100s 00:06:20.001 sys 0m0.079s 00:06:20.001 02:53:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:20.001 02:53:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:20.001 ************************************ 00:06:20.001 END TEST skip_rpc_with_delay 00:06:20.001 ************************************ 00:06:20.001 02:53:05 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:20.001 02:53:05 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:20.001 02:53:05 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:20.001 02:53:05 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:20.001 02:53:05 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:20.001 02:53:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.001 ************************************ 00:06:20.001 START TEST exit_on_failed_rpc_init 00:06:20.001 ************************************ 00:06:20.002 02:53:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:06:20.002 02:53:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=75497 00:06:20.002 02:53:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 75497 00:06:20.002 02:53:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 75497 ']' 00:06:20.002 02:53:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:20.002 02:53:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.002 02:53:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:20.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.002 02:53:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.002 02:53:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:20.002 02:53:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:20.002 [2024-05-14 02:53:05.919508] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:06:20.002 [2024-05-14 02:53:05.919660] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75497 ] 00:06:20.261 [2024-05-14 02:53:06.058360] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:20.261 [2024-05-14 02:53:06.079977] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.261 [2024-05-14 02:53:06.115045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.197 02:53:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:21.197 02:53:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:06:21.197 02:53:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:21.197 02:53:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:21.197 02:53:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:06:21.197 02:53:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:21.197 02:53:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:21.197 02:53:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:21.197 02:53:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:21.197 02:53:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:21.197 02:53:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:21.197 02:53:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:21.197 02:53:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:21.197 02:53:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:21.197 02:53:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:21.197 [2024-05-14 02:53:06.972547] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:06:21.197 [2024-05-14 02:53:06.973163] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75517 ] 00:06:21.197 [2024-05-14 02:53:07.113735] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:21.197 [2024-05-14 02:53:07.138157] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.197 [2024-05-14 02:53:07.180760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.197 [2024-05-14 02:53:07.180897] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
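What the exit_on_failed_rpc_init log above captures: the first spdk_tgt owns the default /var/tmp/spdk.sock, so the second instance fails rpc_listen, spdk_app_start returns non-zero, and the test asserts on that failure. A hedged bash sketch of the same collision; the fixed sleep stands in for the harness's waitforlisten helper, and the core masks match the ones used above.

SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

"$SPDK_BIN" -m 0x1 &           # first instance claims /var/tmp/spdk.sock
first_pid=$!
sleep 5                        # stand-in for waitforlisten

"$SPDK_BIN" -m 0x2             # second instance: socket in use, RPC init fails, non-zero exit
echo "second instance exited with status $?"

kill "$first_pid"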
00:06:21.197 [2024-05-14 02:53:07.180932] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:21.197 [2024-05-14 02:53:07.180976] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:21.457 02:53:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:06:21.457 02:53:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:21.457 02:53:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:06:21.457 02:53:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:06:21.457 02:53:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:06:21.457 02:53:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:21.457 02:53:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:21.457 02:53:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 75497 00:06:21.457 02:53:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 75497 ']' 00:06:21.457 02:53:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 75497 00:06:21.457 02:53:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:06:21.457 02:53:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:21.457 02:53:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75497 00:06:21.457 02:53:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:21.457 02:53:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:21.457 killing process with pid 75497 00:06:21.457 02:53:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75497' 00:06:21.457 02:53:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 75497 00:06:21.457 02:53:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 75497 00:06:21.716 00:06:21.716 real 0m1.790s 00:06:21.716 user 0m2.135s 00:06:21.716 sys 0m0.435s 00:06:21.716 02:53:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:21.716 02:53:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:21.716 ************************************ 00:06:21.716 END TEST exit_on_failed_rpc_init 00:06:21.716 ************************************ 00:06:21.716 02:53:07 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:21.716 00:06:21.716 real 0m14.416s 00:06:21.716 user 0m13.943s 00:06:21.716 sys 0m1.518s 00:06:21.716 02:53:07 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:21.716 02:53:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.716 ************************************ 00:06:21.716 END TEST skip_rpc 00:06:21.716 ************************************ 00:06:21.716 02:53:07 -- spdk/autotest.sh@167 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:21.716 02:53:07 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:21.716 02:53:07 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:21.716 02:53:07 -- common/autotest_common.sh@10 -- # set +x 00:06:21.716 
************************************ 00:06:21.716 START TEST rpc_client 00:06:21.716 ************************************ 00:06:21.716 02:53:07 rpc_client -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:21.976 * Looking for test storage... 00:06:21.976 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:21.976 02:53:07 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:21.976 OK 00:06:21.976 02:53:07 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:21.976 00:06:21.976 real 0m0.130s 00:06:21.976 user 0m0.053s 00:06:21.976 sys 0m0.082s 00:06:21.976 02:53:07 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:21.976 ************************************ 00:06:21.976 END TEST rpc_client 00:06:21.976 ************************************ 00:06:21.976 02:53:07 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:21.976 02:53:07 -- spdk/autotest.sh@168 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:21.976 02:53:07 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:21.976 02:53:07 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:21.976 02:53:07 -- common/autotest_common.sh@10 -- # set +x 00:06:21.976 ************************************ 00:06:21.976 START TEST json_config 00:06:21.976 ************************************ 00:06:21.976 02:53:07 json_config -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:21.976 02:53:07 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:21.976 02:53:07 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:21.976 02:53:07 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:21.976 02:53:07 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:21.976 02:53:07 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:21.976 02:53:07 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:21.976 02:53:07 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:21.976 02:53:07 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:21.976 02:53:07 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:21.976 02:53:07 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:21.976 02:53:07 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:21.976 02:53:07 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:21.976 02:53:07 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:756e7923-3988-42eb-8b30-1352b2ea50fd 00:06:21.976 02:53:07 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=756e7923-3988-42eb-8b30-1352b2ea50fd 00:06:21.976 02:53:07 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:21.976 02:53:07 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:21.976 02:53:07 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:21.976 02:53:07 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:21.976 02:53:07 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:21.976 02:53:07 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:21.976 02:53:07 json_config -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:21.976 02:53:07 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:21.976 02:53:07 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.976 02:53:07 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.976 02:53:07 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.976 02:53:07 json_config -- paths/export.sh@5 -- # export PATH 00:06:21.977 02:53:07 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.977 02:53:07 json_config -- nvmf/common.sh@47 -- # : 0 00:06:21.977 02:53:07 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:21.977 02:53:07 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:21.977 02:53:07 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:21.977 02:53:07 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:21.977 02:53:07 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:21.977 02:53:07 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:21.977 02:53:07 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:21.977 02:53:07 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:21.977 02:53:07 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:21.977 02:53:07 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:21.977 02:53:07 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:21.977 02:53:07 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:21.977 02:53:07 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:21.977 WARNING: No tests are enabled so not running JSON configuration tests 00:06:21.977 02:53:07 json_config -- json_config/json_config.sh@27 -- # echo 
'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:21.977 02:53:07 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:21.977 00:06:21.977 real 0m0.077s 00:06:21.977 user 0m0.043s 00:06:21.977 sys 0m0.034s 00:06:21.977 02:53:07 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:21.977 ************************************ 00:06:21.977 END TEST json_config 00:06:21.977 ************************************ 00:06:21.977 02:53:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:22.236 02:53:08 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:22.236 02:53:08 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:22.236 02:53:08 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:22.236 02:53:08 -- common/autotest_common.sh@10 -- # set +x 00:06:22.236 ************************************ 00:06:22.236 START TEST json_config_extra_key 00:06:22.236 ************************************ 00:06:22.236 02:53:08 json_config_extra_key -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:22.236 02:53:08 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:22.236 02:53:08 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:22.236 02:53:08 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:22.236 02:53:08 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:22.236 02:53:08 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:22.236 02:53:08 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:22.236 02:53:08 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:22.236 02:53:08 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:22.237 02:53:08 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:22.237 02:53:08 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:22.237 02:53:08 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:22.237 02:53:08 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:22.237 02:53:08 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:756e7923-3988-42eb-8b30-1352b2ea50fd 00:06:22.237 02:53:08 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=756e7923-3988-42eb-8b30-1352b2ea50fd 00:06:22.237 02:53:08 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:22.237 02:53:08 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:22.237 02:53:08 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:22.237 02:53:08 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:22.237 02:53:08 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:22.237 02:53:08 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:22.237 02:53:08 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:22.237 02:53:08 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:22.237 
02:53:08 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.237 02:53:08 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.237 02:53:08 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.237 02:53:08 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:22.237 02:53:08 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.237 02:53:08 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:22.237 02:53:08 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:22.237 02:53:08 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:22.237 02:53:08 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:22.237 02:53:08 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:22.237 02:53:08 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:22.237 02:53:08 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:22.237 02:53:08 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:22.237 02:53:08 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:22.237 02:53:08 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:22.237 02:53:08 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:22.237 02:53:08 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:22.237 02:53:08 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:22.237 02:53:08 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:22.237 02:53:08 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:22.237 02:53:08 
json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:22.237 02:53:08 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:22.237 02:53:08 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:22.237 02:53:08 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:22.237 INFO: launching applications... 00:06:22.237 02:53:08 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:22.237 02:53:08 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:22.237 02:53:08 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:22.237 02:53:08 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:22.237 02:53:08 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:22.237 02:53:08 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:22.237 02:53:08 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:22.237 02:53:08 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:22.237 02:53:08 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:22.237 02:53:08 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=75670 00:06:22.237 02:53:08 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:22.237 02:53:08 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:22.237 Waiting for target to run... 00:06:22.237 02:53:08 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 75670 /var/tmp/spdk_tgt.sock 00:06:22.237 02:53:08 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 75670 ']' 00:06:22.237 02:53:08 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:22.237 02:53:08 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:22.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:22.237 02:53:08 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:22.237 02:53:08 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:22.237 02:53:08 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:22.237 [2024-05-14 02:53:08.216538] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:06:22.237 [2024-05-14 02:53:08.216788] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75670 ] 00:06:22.806 [2024-05-14 02:53:08.544331] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
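[Editor's note] The json_config_extra_key trace launches spdk_tgt with --json on a private RPC socket and then waits for it to listen (the remaining startup notices continue below). A rough sketch of that launch-and-wait idea, reusing the socket path, config path and flags from the trace; this polling loop is an illustration, not the real waitforlisten implementation:

    sock=/var/tmp/spdk_tgt.sock
    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r "$sock" --json test/json_config/extra_key.json &
    tgt_pid=$!

    # Poll the RPC socket until the target answers or roughly 10 seconds pass.
    for _ in $(seq 1 100); do
        scripts/rpc.py -s "$sock" spdk_get_version >/dev/null 2>&1 && break
        kill -0 "$tgt_pid" 2>/dev/null || { echo 'target exited during startup' >&2; exit 1; }
        sleep 0.1
    done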
00:06:22.806 [2024-05-14 02:53:08.566604] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.806 [2024-05-14 02:53:08.587302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.374 02:53:09 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:23.374 02:53:09 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:06:23.374 00:06:23.374 02:53:09 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:23.374 INFO: shutting down applications... 00:06:23.374 02:53:09 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:23.374 02:53:09 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:23.374 02:53:09 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:23.374 02:53:09 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:23.374 02:53:09 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 75670 ]] 00:06:23.374 02:53:09 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 75670 00:06:23.374 02:53:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:23.374 02:53:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:23.374 02:53:09 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 75670 00:06:23.374 02:53:09 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:23.632 02:53:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:23.632 02:53:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:23.632 02:53:09 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 75670 00:06:23.632 02:53:09 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:23.632 02:53:09 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:23.632 02:53:09 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:23.632 SPDK target shutdown done 00:06:23.632 02:53:09 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:23.632 Success 00:06:23.632 02:53:09 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:23.632 00:06:23.632 real 0m1.632s 00:06:23.632 user 0m1.415s 00:06:23.632 sys 0m0.418s 00:06:23.632 02:53:09 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:23.632 02:53:09 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:23.632 ************************************ 00:06:23.632 END TEST json_config_extra_key 00:06:23.632 ************************************ 00:06:23.891 02:53:09 -- spdk/autotest.sh@170 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:23.891 02:53:09 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:23.891 02:53:09 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:23.891 02:53:09 -- common/autotest_common.sh@10 -- # set +x 00:06:23.891 ************************************ 00:06:23.891 START TEST alias_rpc 00:06:23.891 ************************************ 00:06:23.891 02:53:09 alias_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:23.891 * Looking for test storage... 
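[Editor's note] The shutdown sequence traced above (json_config/common.sh) sends SIGINT and then polls kill -0 up to 30 times, half a second apart, before declaring the target gone. A condensed sketch of that wait-for-exit loop, assuming $tgt_pid holds the pid recorded at launch:

    kill -SIGINT "$tgt_pid"                        # ask the target to shut down cleanly
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$tgt_pid" 2>/dev/null || break    # stop polling once the pid is gone
        sleep 0.5
    done
    kill -0 "$tgt_pid" 2>/dev/null && echo 'target did not stop in time' >&2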
00:06:23.891 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:23.891 02:53:09 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:23.891 02:53:09 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=75735 00:06:23.891 02:53:09 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:23.891 02:53:09 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 75735 00:06:23.891 02:53:09 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 75735 ']' 00:06:23.891 02:53:09 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.891 02:53:09 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:23.891 02:53:09 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.891 02:53:09 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:23.891 02:53:09 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.891 [2024-05-14 02:53:09.881156] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:06:23.891 [2024-05-14 02:53:09.881313] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75735 ] 00:06:24.150 [2024-05-14 02:53:10.017925] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
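[Editor's note] alias_rpc.sh arms an ERR trap before starting its target so that any failing command below kills the daemon instead of leaking it. A small sketch of that cleanup pattern, with a hypothetical stop_target helper standing in for killprocess:

    stop_target() { kill "$1" 2>/dev/null; wait "$1" 2>/dev/null; }

    ./build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    trap 'stop_target "$spdk_tgt_pid"; exit 1' ERR    # any failure below tears the target down

    scripts/rpc.py spdk_get_version >/dev/null         # example RPC; an error here fires the trap
    trap - ERR                                         # disarm once the test body has succeeded
    stop_target "$spdk_tgt_pid"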
00:06:24.150 [2024-05-14 02:53:10.035762] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.150 [2024-05-14 02:53:10.069399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.087 02:53:10 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:25.087 02:53:10 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:25.087 02:53:10 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:25.087 02:53:11 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 75735 00:06:25.087 02:53:11 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 75735 ']' 00:06:25.087 02:53:11 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 75735 00:06:25.087 02:53:11 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:06:25.087 02:53:11 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:25.087 02:53:11 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75735 00:06:25.087 02:53:11 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:25.087 02:53:11 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:25.088 killing process with pid 75735 00:06:25.088 02:53:11 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75735' 00:06:25.088 02:53:11 alias_rpc -- common/autotest_common.sh@965 -- # kill 75735 00:06:25.088 02:53:11 alias_rpc -- common/autotest_common.sh@970 -- # wait 75735 00:06:25.347 00:06:25.347 real 0m1.656s 00:06:25.347 user 0m1.938s 00:06:25.347 sys 0m0.356s 00:06:25.347 02:53:11 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:25.347 02:53:11 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.347 ************************************ 00:06:25.347 END TEST alias_rpc 00:06:25.347 ************************************ 00:06:25.607 02:53:11 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:06:25.607 02:53:11 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:25.607 02:53:11 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:25.607 02:53:11 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:25.607 02:53:11 -- common/autotest_common.sh@10 -- # set +x 00:06:25.607 ************************************ 00:06:25.607 START TEST spdkcli_tcp 00:06:25.607 ************************************ 00:06:25.607 02:53:11 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:25.607 * Looking for test storage... 
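[Editor's note] killprocess, traced above for pid 75735, does not signal blindly: it checks that a pid was recorded, confirms the process is still alive with kill -0, and inspects the command name with ps before sending the kill. A hedged sketch of those checks (the real helper also special-cases a sudo wrapper, a branch this trace never takes):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                  # no pid recorded, nothing to do
        kill -0 "$pid" 2>/dev/null || return 1     # pid must still be alive
        local name
        name=$(ps --no-headers -o comm= "$pid")    # identify what is about to be killed
        if [ "$name" = sudo ]; then
            echo "refusing to signal sudo wrapper $pid" >&2   # sketch only; the real helper handles this case
            return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid" 2>/dev/null     # wait works here because the target is a child shell process
        return 0
    }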
00:06:25.607 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:25.607 02:53:11 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:25.607 02:53:11 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:25.607 02:53:11 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:25.607 02:53:11 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:25.607 02:53:11 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:25.607 02:53:11 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:25.607 02:53:11 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:25.607 02:53:11 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:25.607 02:53:11 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:25.607 02:53:11 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=75812 00:06:25.607 02:53:11 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 75812 00:06:25.607 02:53:11 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 75812 ']' 00:06:25.607 02:53:11 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.607 02:53:11 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:25.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.607 02:53:11 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.607 02:53:11 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:25.607 02:53:11 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:25.607 02:53:11 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:25.607 [2024-05-14 02:53:11.605636] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:06:25.607 [2024-05-14 02:53:11.605845] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75812 ] 00:06:25.867 [2024-05-14 02:53:11.757017] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
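[Editor's note] spdkcli_tcp drives the same RPC surface over TCP: a socat process (started just below) forwards 127.0.0.1:9998 to the target's Unix socket, and rpc.py is pointed at the TCP side. A minimal sketch of that bridge, reusing the exact address, port and flags from the trace:

    # Forward a local TCP port to the target's Unix-domain RPC socket.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    # Query the method list through the TCP side; -r/-t retry and time out while socat comes up.
    scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

    kill "$socat_pid" 2>/dev/null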
00:06:25.867 [2024-05-14 02:53:11.777278] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:25.867 [2024-05-14 02:53:11.811929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.867 [2024-05-14 02:53:11.811977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.803 02:53:12 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:26.803 02:53:12 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:06:26.803 02:53:12 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=75828 00:06:26.803 02:53:12 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:26.803 02:53:12 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:26.803 [ 00:06:26.803 "bdev_malloc_delete", 00:06:26.803 "bdev_malloc_create", 00:06:26.803 "bdev_null_resize", 00:06:26.803 "bdev_null_delete", 00:06:26.803 "bdev_null_create", 00:06:26.803 "bdev_nvme_cuse_unregister", 00:06:26.803 "bdev_nvme_cuse_register", 00:06:26.803 "bdev_opal_new_user", 00:06:26.803 "bdev_opal_set_lock_state", 00:06:26.803 "bdev_opal_delete", 00:06:26.803 "bdev_opal_get_info", 00:06:26.803 "bdev_opal_create", 00:06:26.803 "bdev_nvme_opal_revert", 00:06:26.803 "bdev_nvme_opal_init", 00:06:26.803 "bdev_nvme_send_cmd", 00:06:26.803 "bdev_nvme_get_path_iostat", 00:06:26.803 "bdev_nvme_get_mdns_discovery_info", 00:06:26.803 "bdev_nvme_stop_mdns_discovery", 00:06:26.803 "bdev_nvme_start_mdns_discovery", 00:06:26.803 "bdev_nvme_set_multipath_policy", 00:06:26.803 "bdev_nvme_set_preferred_path", 00:06:26.803 "bdev_nvme_get_io_paths", 00:06:26.803 "bdev_nvme_remove_error_injection", 00:06:26.803 "bdev_nvme_add_error_injection", 00:06:26.803 "bdev_nvme_get_discovery_info", 00:06:26.803 "bdev_nvme_stop_discovery", 00:06:26.803 "bdev_nvme_start_discovery", 00:06:26.803 "bdev_nvme_get_controller_health_info", 00:06:26.803 "bdev_nvme_disable_controller", 00:06:26.803 "bdev_nvme_enable_controller", 00:06:26.803 "bdev_nvme_reset_controller", 00:06:26.803 "bdev_nvme_get_transport_statistics", 00:06:26.803 "bdev_nvme_apply_firmware", 00:06:26.803 "bdev_nvme_detach_controller", 00:06:26.803 "bdev_nvme_get_controllers", 00:06:26.803 "bdev_nvme_attach_controller", 00:06:26.803 "bdev_nvme_set_hotplug", 00:06:26.803 "bdev_nvme_set_options", 00:06:26.803 "bdev_passthru_delete", 00:06:26.803 "bdev_passthru_create", 00:06:26.803 "bdev_lvol_check_shallow_copy", 00:06:26.803 "bdev_lvol_start_shallow_copy", 00:06:26.803 "bdev_lvol_grow_lvstore", 00:06:26.803 "bdev_lvol_get_lvols", 00:06:26.803 "bdev_lvol_get_lvstores", 00:06:26.803 "bdev_lvol_delete", 00:06:26.803 "bdev_lvol_set_read_only", 00:06:26.803 "bdev_lvol_resize", 00:06:26.803 "bdev_lvol_decouple_parent", 00:06:26.803 "bdev_lvol_inflate", 00:06:26.803 "bdev_lvol_rename", 00:06:26.803 "bdev_lvol_clone_bdev", 00:06:26.803 "bdev_lvol_clone", 00:06:26.803 "bdev_lvol_snapshot", 00:06:26.803 "bdev_lvol_create", 00:06:26.803 "bdev_lvol_delete_lvstore", 00:06:26.803 "bdev_lvol_rename_lvstore", 00:06:26.803 "bdev_lvol_create_lvstore", 00:06:26.803 "bdev_raid_set_options", 00:06:26.803 "bdev_raid_remove_base_bdev", 00:06:26.803 "bdev_raid_add_base_bdev", 00:06:26.803 "bdev_raid_delete", 00:06:26.803 "bdev_raid_create", 00:06:26.803 "bdev_raid_get_bdevs", 00:06:26.803 "bdev_error_inject_error", 00:06:26.803 "bdev_error_delete", 00:06:26.803 "bdev_error_create", 00:06:26.803 "bdev_split_delete", 00:06:26.803 "bdev_split_create", 
00:06:26.803 "bdev_delay_delete", 00:06:26.803 "bdev_delay_create", 00:06:26.803 "bdev_delay_update_latency", 00:06:26.803 "bdev_zone_block_delete", 00:06:26.803 "bdev_zone_block_create", 00:06:26.803 "blobfs_create", 00:06:26.803 "blobfs_detect", 00:06:26.803 "blobfs_set_cache_size", 00:06:26.803 "bdev_xnvme_delete", 00:06:26.803 "bdev_xnvme_create", 00:06:26.803 "bdev_aio_delete", 00:06:26.803 "bdev_aio_rescan", 00:06:26.803 "bdev_aio_create", 00:06:26.803 "bdev_ftl_set_property", 00:06:26.803 "bdev_ftl_get_properties", 00:06:26.803 "bdev_ftl_get_stats", 00:06:26.803 "bdev_ftl_unmap", 00:06:26.803 "bdev_ftl_unload", 00:06:26.803 "bdev_ftl_delete", 00:06:26.803 "bdev_ftl_load", 00:06:26.803 "bdev_ftl_create", 00:06:26.803 "bdev_virtio_attach_controller", 00:06:26.803 "bdev_virtio_scsi_get_devices", 00:06:26.803 "bdev_virtio_detach_controller", 00:06:26.803 "bdev_virtio_blk_set_hotplug", 00:06:26.803 "bdev_iscsi_delete", 00:06:26.803 "bdev_iscsi_create", 00:06:26.803 "bdev_iscsi_set_options", 00:06:26.803 "accel_error_inject_error", 00:06:26.803 "ioat_scan_accel_module", 00:06:26.803 "dsa_scan_accel_module", 00:06:26.803 "iaa_scan_accel_module", 00:06:26.803 "keyring_file_remove_key", 00:06:26.803 "keyring_file_add_key", 00:06:26.803 "iscsi_get_histogram", 00:06:26.803 "iscsi_enable_histogram", 00:06:26.803 "iscsi_set_options", 00:06:26.803 "iscsi_get_auth_groups", 00:06:26.803 "iscsi_auth_group_remove_secret", 00:06:26.803 "iscsi_auth_group_add_secret", 00:06:26.803 "iscsi_delete_auth_group", 00:06:26.803 "iscsi_create_auth_group", 00:06:26.803 "iscsi_set_discovery_auth", 00:06:26.803 "iscsi_get_options", 00:06:26.803 "iscsi_target_node_request_logout", 00:06:26.803 "iscsi_target_node_set_redirect", 00:06:26.803 "iscsi_target_node_set_auth", 00:06:26.803 "iscsi_target_node_add_lun", 00:06:26.803 "iscsi_get_stats", 00:06:26.803 "iscsi_get_connections", 00:06:26.803 "iscsi_portal_group_set_auth", 00:06:26.803 "iscsi_start_portal_group", 00:06:26.803 "iscsi_delete_portal_group", 00:06:26.804 "iscsi_create_portal_group", 00:06:26.804 "iscsi_get_portal_groups", 00:06:26.804 "iscsi_delete_target_node", 00:06:26.804 "iscsi_target_node_remove_pg_ig_maps", 00:06:26.804 "iscsi_target_node_add_pg_ig_maps", 00:06:26.804 "iscsi_create_target_node", 00:06:26.804 "iscsi_get_target_nodes", 00:06:26.804 "iscsi_delete_initiator_group", 00:06:26.804 "iscsi_initiator_group_remove_initiators", 00:06:26.804 "iscsi_initiator_group_add_initiators", 00:06:26.804 "iscsi_create_initiator_group", 00:06:26.804 "iscsi_get_initiator_groups", 00:06:26.804 "nvmf_set_crdt", 00:06:26.804 "nvmf_set_config", 00:06:26.804 "nvmf_set_max_subsystems", 00:06:26.804 "nvmf_subsystem_get_listeners", 00:06:26.804 "nvmf_subsystem_get_qpairs", 00:06:26.804 "nvmf_subsystem_get_controllers", 00:06:26.804 "nvmf_get_stats", 00:06:26.804 "nvmf_get_transports", 00:06:26.804 "nvmf_create_transport", 00:06:26.804 "nvmf_get_targets", 00:06:26.804 "nvmf_delete_target", 00:06:26.804 "nvmf_create_target", 00:06:26.804 "nvmf_subsystem_allow_any_host", 00:06:26.804 "nvmf_subsystem_remove_host", 00:06:26.804 "nvmf_subsystem_add_host", 00:06:26.804 "nvmf_ns_remove_host", 00:06:26.804 "nvmf_ns_add_host", 00:06:26.804 "nvmf_subsystem_remove_ns", 00:06:26.804 "nvmf_subsystem_add_ns", 00:06:26.804 "nvmf_subsystem_listener_set_ana_state", 00:06:26.804 "nvmf_discovery_get_referrals", 00:06:26.804 "nvmf_discovery_remove_referral", 00:06:26.804 "nvmf_discovery_add_referral", 00:06:26.804 "nvmf_subsystem_remove_listener", 00:06:26.804 
"nvmf_subsystem_add_listener", 00:06:26.804 "nvmf_delete_subsystem", 00:06:26.804 "nvmf_create_subsystem", 00:06:26.804 "nvmf_get_subsystems", 00:06:26.804 "env_dpdk_get_mem_stats", 00:06:26.804 "nbd_get_disks", 00:06:26.804 "nbd_stop_disk", 00:06:26.804 "nbd_start_disk", 00:06:26.804 "ublk_recover_disk", 00:06:26.804 "ublk_get_disks", 00:06:26.804 "ublk_stop_disk", 00:06:26.804 "ublk_start_disk", 00:06:26.804 "ublk_destroy_target", 00:06:26.804 "ublk_create_target", 00:06:26.804 "virtio_blk_create_transport", 00:06:26.804 "virtio_blk_get_transports", 00:06:26.804 "vhost_controller_set_coalescing", 00:06:26.804 "vhost_get_controllers", 00:06:26.804 "vhost_delete_controller", 00:06:26.804 "vhost_create_blk_controller", 00:06:26.804 "vhost_scsi_controller_remove_target", 00:06:26.804 "vhost_scsi_controller_add_target", 00:06:26.804 "vhost_start_scsi_controller", 00:06:26.804 "vhost_create_scsi_controller", 00:06:26.804 "thread_set_cpumask", 00:06:26.804 "framework_get_scheduler", 00:06:26.804 "framework_set_scheduler", 00:06:26.804 "framework_get_reactors", 00:06:26.804 "thread_get_io_channels", 00:06:26.804 "thread_get_pollers", 00:06:26.804 "thread_get_stats", 00:06:26.804 "framework_monitor_context_switch", 00:06:26.804 "spdk_kill_instance", 00:06:26.804 "log_enable_timestamps", 00:06:26.804 "log_get_flags", 00:06:26.804 "log_clear_flag", 00:06:26.804 "log_set_flag", 00:06:26.804 "log_get_level", 00:06:26.804 "log_set_level", 00:06:26.804 "log_get_print_level", 00:06:26.804 "log_set_print_level", 00:06:26.804 "framework_enable_cpumask_locks", 00:06:26.804 "framework_disable_cpumask_locks", 00:06:26.804 "framework_wait_init", 00:06:26.804 "framework_start_init", 00:06:26.804 "scsi_get_devices", 00:06:26.804 "bdev_get_histogram", 00:06:26.804 "bdev_enable_histogram", 00:06:26.804 "bdev_set_qos_limit", 00:06:26.804 "bdev_set_qd_sampling_period", 00:06:26.804 "bdev_get_bdevs", 00:06:26.804 "bdev_reset_iostat", 00:06:26.804 "bdev_get_iostat", 00:06:26.804 "bdev_examine", 00:06:26.804 "bdev_wait_for_examine", 00:06:26.804 "bdev_set_options", 00:06:26.804 "notify_get_notifications", 00:06:26.804 "notify_get_types", 00:06:26.804 "accel_get_stats", 00:06:26.804 "accel_set_options", 00:06:26.804 "accel_set_driver", 00:06:26.804 "accel_crypto_key_destroy", 00:06:26.804 "accel_crypto_keys_get", 00:06:26.804 "accel_crypto_key_create", 00:06:26.804 "accel_assign_opc", 00:06:26.804 "accel_get_module_info", 00:06:26.804 "accel_get_opc_assignments", 00:06:26.804 "vmd_rescan", 00:06:26.804 "vmd_remove_device", 00:06:26.804 "vmd_enable", 00:06:26.804 "sock_get_default_impl", 00:06:26.804 "sock_set_default_impl", 00:06:26.804 "sock_impl_set_options", 00:06:26.804 "sock_impl_get_options", 00:06:26.804 "iobuf_get_stats", 00:06:26.804 "iobuf_set_options", 00:06:26.804 "framework_get_pci_devices", 00:06:26.804 "framework_get_config", 00:06:26.804 "framework_get_subsystems", 00:06:26.804 "trace_get_info", 00:06:26.804 "trace_get_tpoint_group_mask", 00:06:26.804 "trace_disable_tpoint_group", 00:06:26.804 "trace_enable_tpoint_group", 00:06:26.804 "trace_clear_tpoint_mask", 00:06:26.804 "trace_set_tpoint_mask", 00:06:26.804 "keyring_get_keys", 00:06:26.804 "spdk_get_version", 00:06:26.804 "rpc_get_methods" 00:06:26.804 ] 00:06:26.804 02:53:12 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:26.804 02:53:12 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:26.804 02:53:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:26.804 02:53:12 spdkcli_tcp -- 
spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:26.804 02:53:12 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 75812 00:06:26.804 02:53:12 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 75812 ']' 00:06:26.804 02:53:12 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 75812 00:06:26.804 02:53:12 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:06:26.804 02:53:12 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:26.804 02:53:12 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75812 00:06:26.804 02:53:12 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:26.804 02:53:12 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:26.804 killing process with pid 75812 00:06:26.804 02:53:12 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75812' 00:06:26.804 02:53:12 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 75812 00:06:26.804 02:53:12 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 75812 00:06:27.064 00:06:27.064 real 0m1.677s 00:06:27.064 user 0m3.108s 00:06:27.064 sys 0m0.409s 00:06:27.064 02:53:13 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:27.064 ************************************ 00:06:27.064 END TEST spdkcli_tcp 00:06:27.064 ************************************ 00:06:27.064 02:53:13 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:27.333 02:53:13 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:27.333 02:53:13 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:27.333 02:53:13 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:27.333 02:53:13 -- common/autotest_common.sh@10 -- # set +x 00:06:27.333 ************************************ 00:06:27.333 START TEST dpdk_mem_utility 00:06:27.333 ************************************ 00:06:27.333 02:53:13 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:27.333 * Looking for test storage... 00:06:27.333 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:27.333 02:53:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:27.333 02:53:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=75899 00:06:27.333 02:53:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 75899 00:06:27.333 02:53:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:27.333 02:53:13 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 75899 ']' 00:06:27.333 02:53:13 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.333 02:53:13 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:27.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.333 02:53:13 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
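[Editor's note] dpdk_mem_utility pairs the env_dpdk_get_mem_stats RPC with scripts/dpdk_mem_info.py: the RPC makes the running target write a memory dump (the trace below shows it reporting /tmp/spdk_mem_dump.txt), and the script renders the heap, mempool and memzone summaries that follow. The sequence, roughly, assuming a target is already listening on the default RPC socket:

    # Ask the running target to dump its DPDK memory state, then summarize the dump.
    scripts/rpc.py env_dpdk_get_mem_stats        # writes the dump file echoed in the trace below
    scripts/dpdk_mem_info.py                     # heap / mempool / memzone summary
    scripts/dpdk_mem_info.py -m 0                # per-heap element listing for heap id 0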
00:06:27.333 02:53:13 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:27.333 02:53:13 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:27.333 [2024-05-14 02:53:13.310280] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:06:27.333 [2024-05-14 02:53:13.310431] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75899 ] 00:06:27.592 [2024-05-14 02:53:13.448753] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:27.592 [2024-05-14 02:53:13.465713] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.592 [2024-05-14 02:53:13.499958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.160 02:53:14 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:28.160 02:53:14 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:06:28.160 02:53:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:28.160 02:53:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:28.160 02:53:14 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.160 02:53:14 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:28.160 { 00:06:28.160 "filename": "/tmp/spdk_mem_dump.txt" 00:06:28.160 } 00:06:28.160 02:53:14 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.160 02:53:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:28.421 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:28.421 1 heaps totaling size 814.000000 MiB 00:06:28.421 size: 814.000000 MiB heap id: 0 00:06:28.421 end heaps---------- 00:06:28.421 8 mempools totaling size 598.116089 MiB 00:06:28.421 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:28.421 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:28.421 size: 84.521057 MiB name: bdev_io_75899 00:06:28.421 size: 51.011292 MiB name: evtpool_75899 00:06:28.421 size: 50.003479 MiB name: msgpool_75899 00:06:28.421 size: 21.763794 MiB name: PDU_Pool 00:06:28.421 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:28.421 size: 0.026123 MiB name: Session_Pool 00:06:28.421 end mempools------- 00:06:28.421 6 memzones totaling size 4.142822 MiB 00:06:28.421 size: 1.000366 MiB name: RG_ring_0_75899 00:06:28.421 size: 1.000366 MiB name: RG_ring_1_75899 00:06:28.421 size: 1.000366 MiB name: RG_ring_4_75899 00:06:28.421 size: 1.000366 MiB name: RG_ring_5_75899 00:06:28.421 size: 0.125366 MiB name: RG_ring_2_75899 00:06:28.421 size: 0.015991 MiB name: RG_ring_3_75899 00:06:28.421 end memzones------- 00:06:28.421 02:53:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:28.421 heap id: 0 total size: 814.000000 MiB number of busy elements: 309 number of free elements: 15 00:06:28.421 list of free elements. 
size: 12.470276 MiB 00:06:28.421 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:28.421 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:28.421 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:28.421 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:28.421 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:28.421 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:28.421 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:28.421 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:28.421 element at address: 0x200000200000 with size: 0.833191 MiB 00:06:28.421 element at address: 0x20001aa00000 with size: 0.567688 MiB 00:06:28.421 element at address: 0x20000b200000 with size: 0.488892 MiB 00:06:28.421 element at address: 0x200000800000 with size: 0.486145 MiB 00:06:28.421 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:28.421 element at address: 0x200027e00000 with size: 0.395752 MiB 00:06:28.421 element at address: 0x200003a00000 with size: 0.347839 MiB 00:06:28.421 list of standard malloc elements. size: 199.267151 MiB 00:06:28.421 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:28.421 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:28.421 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:28.421 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:28.421 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:28.421 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:28.421 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:28.421 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:28.421 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:28.421 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:06:28.421 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:06:28.421 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:06:28.421 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:06:28.421 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:06:28.421 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:06:28.421 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:06:28.421 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:06:28.421 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:06:28.421 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:06:28.421 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:06:28.421 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:06:28.421 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:06:28.421 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:06:28.421 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:06:28.421 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:06:28.421 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:06:28.421 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:06:28.421 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:06:28.421 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:06:28.421 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:06:28.421 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:06:28.421 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:06:28.421 element at address: 0x2000002d6600 with size: 0.000183 MiB 
00:06:28.421 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:06:28.421 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:06:28.421 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:06:28.421 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:06:28.421 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:06:28.421 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:06:28.421 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:06:28.421 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:06:28.421 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:06:28.421 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:06:28.421 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:06:28.421 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:06:28.421 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:06:28.421 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:06:28.421 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:06:28.421 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:06:28.421 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:06:28.421 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:06:28.421 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:06:28.421 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:06:28.421 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:06:28.421 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:06:28.421 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:06:28.421 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:06:28.421 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:06:28.421 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:28.421 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:28.421 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:28.421 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:28.421 element at address: 0x20000087c740 with size: 0.000183 MiB 00:06:28.421 element at address: 0x20000087c800 with size: 0.000183 MiB 00:06:28.421 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:06:28.421 element at address: 0x20000087c980 with size: 0.000183 MiB 00:06:28.421 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:06:28.421 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:06:28.421 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:06:28.421 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:06:28.421 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:06:28.421 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:28.421 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:28.421 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:28.421 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:06:28.421 element at address: 0x200003a59180 with size: 0.000183 MiB 00:06:28.421 element at address: 0x200003a59240 with size: 0.000183 MiB 00:06:28.421 element at address: 0x200003a59300 with size: 0.000183 MiB 00:06:28.421 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:06:28.421 element at address: 0x200003a59480 with size: 0.000183 MiB 00:06:28.421 element at address: 0x200003a59540 with size: 0.000183 MiB 00:06:28.421 element at address: 0x200003a59600 with size: 0.000183 MiB 00:06:28.421 element at 
address: 0x200003a596c0 with size: 0.000183 MiB 00:06:28.421 element at address: 0x200003a59780 with size: 0.000183 MiB 00:06:28.421 element at address: 0x200003a59840 with size: 0.000183 MiB 00:06:28.421 element at address: 0x200003a59900 with size: 0.000183 MiB 00:06:28.421 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:06:28.421 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:06:28.421 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:06:28.421 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:06:28.421 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:06:28.421 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:06:28.421 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:06:28.421 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:06:28.421 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:06:28.421 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:06:28.421 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:06:28.421 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:06:28.421 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:06:28.421 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:06:28.421 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:06:28.421 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:06:28.421 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:06:28.421 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:06:28.421 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:06:28.422 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:06:28.422 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:06:28.422 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:06:28.422 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:06:28.422 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:06:28.422 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:06:28.422 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:06:28.422 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:06:28.422 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:06:28.422 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:06:28.422 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:06:28.422 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:28.422 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:28.422 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:28.422 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:28.422 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:28.422 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:28.422 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:28.422 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20000b27d7c0 
with size: 0.000183 MiB 00:06:28.422 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:28.422 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:28.422 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:28.422 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:28.422 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa91540 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa91600 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa916c0 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa91780 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa91840 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa91900 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa93280 with size: 0.000183 MiB 
00:06:28.422 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:28.422 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:28.422 element at address: 0x200027e65500 with size: 0.000183 MiB 00:06:28.422 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:06:28.422 element at address: 0x200027e6c1c0 with size: 0.000183 MiB 00:06:28.422 element at address: 0x200027e6c3c0 with size: 0.000183 MiB 00:06:28.422 element at 
address: 0x200027e6c480 with size: 0.000183 MiB 00:06:28.422 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:06:28.422 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:06:28.422 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:06:28.422 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:06:28.422 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:06:28.422 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:06:28.422 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:06:28.422 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:06:28.422 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:06:28.422 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:06:28.422 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:06:28.422 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:06:28.422 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:06:28.422 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:06:28.422 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:06:28.422 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:06:28.422 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:06:28.422 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:06:28.422 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:06:28.422 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:06:28.422 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:06:28.422 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:06:28.422 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:06:28.422 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:06:28.422 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:06:28.422 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:06:28.422 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:06:28.422 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:06:28.422 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:06:28.422 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:06:28.422 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:06:28.422 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:06:28.422 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:06:28.422 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:06:28.422 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:06:28.423 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:06:28.423 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:06:28.423 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:06:28.423 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:06:28.423 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:06:28.423 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:06:28.423 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:06:28.423 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:06:28.423 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:06:28.423 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:06:28.423 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:06:28.423 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:06:28.423 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:06:28.423 element at address: 0x200027e6e940 
with size: 0.000183 MiB 00:06:28.423 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:06:28.423 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:06:28.423 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:06:28.423 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:06:28.423 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:06:28.423 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:06:28.423 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:06:28.423 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:06:28.423 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:06:28.423 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:06:28.423 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:06:28.423 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:06:28.423 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:06:28.423 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:06:28.423 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:06:28.423 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:06:28.423 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:06:28.423 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:06:28.423 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:06:28.423 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:06:28.423 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:06:28.423 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:06:28.423 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:06:28.423 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:06:28.423 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:06:28.423 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:06:28.423 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:06:28.423 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:28.423 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:28.423 list of memzone associated elements. 
size: 602.262573 MiB 00:06:28.423 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:28.423 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:28.423 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:28.423 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:28.423 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:28.423 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_75899_0 00:06:28.423 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:28.423 associated memzone info: size: 48.002930 MiB name: MP_evtpool_75899_0 00:06:28.423 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:28.423 associated memzone info: size: 48.002930 MiB name: MP_msgpool_75899_0 00:06:28.423 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:28.423 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:28.423 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:28.423 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:28.423 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:28.423 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_75899 00:06:28.423 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:28.423 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_75899 00:06:28.423 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:28.423 associated memzone info: size: 1.007996 MiB name: MP_evtpool_75899 00:06:28.423 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:28.423 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:28.423 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:28.423 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:28.423 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:28.423 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:28.423 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:28.423 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:28.423 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:28.423 associated memzone info: size: 1.000366 MiB name: RG_ring_0_75899 00:06:28.423 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:28.423 associated memzone info: size: 1.000366 MiB name: RG_ring_1_75899 00:06:28.423 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:28.423 associated memzone info: size: 1.000366 MiB name: RG_ring_4_75899 00:06:28.423 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:28.423 associated memzone info: size: 1.000366 MiB name: RG_ring_5_75899 00:06:28.423 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:28.423 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_75899 00:06:28.423 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:28.423 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:28.423 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:28.423 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:28.423 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:28.423 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:28.423 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:28.423 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_75899 00:06:28.423 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:28.423 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:28.423 element at address: 0x200027e65680 with size: 0.023743 MiB 00:06:28.423 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:28.423 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:28.423 associated memzone info: size: 0.015991 MiB name: RG_ring_3_75899 00:06:28.423 element at address: 0x200027e6b7c0 with size: 0.002441 MiB 00:06:28.423 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:28.423 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:06:28.423 associated memzone info: size: 0.000183 MiB name: MP_msgpool_75899 00:06:28.423 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:28.423 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_75899 00:06:28.423 element at address: 0x200027e6c280 with size: 0.000305 MiB 00:06:28.423 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:28.423 02:53:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:28.423 02:53:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 75899 00:06:28.423 02:53:14 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 75899 ']' 00:06:28.423 02:53:14 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 75899 00:06:28.423 02:53:14 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:06:28.423 02:53:14 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:28.423 02:53:14 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75899 00:06:28.423 killing process with pid 75899 00:06:28.423 02:53:14 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:28.423 02:53:14 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:28.423 02:53:14 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75899' 00:06:28.423 02:53:14 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 75899 00:06:28.423 02:53:14 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 75899 00:06:28.682 00:06:28.682 real 0m1.451s 00:06:28.682 user 0m1.524s 00:06:28.682 sys 0m0.388s 00:06:28.682 02:53:14 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:28.682 ************************************ 00:06:28.682 END TEST dpdk_mem_utility 00:06:28.682 02:53:14 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:28.682 ************************************ 00:06:28.682 02:53:14 -- spdk/autotest.sh@177 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:28.682 02:53:14 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:28.682 02:53:14 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:28.682 02:53:14 -- common/autotest_common.sh@10 -- # set +x 00:06:28.682 ************************************ 00:06:28.682 START TEST event 00:06:28.682 ************************************ 00:06:28.682 02:53:14 event -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:28.682 * Looking for test storage... 
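The dpdk_mem_utility teardown above shuts the target down with the killprocess helper against pid 75899. A minimal sketch of that pattern, reconstructed only from the commands traced in the log (an assumption: this is a simplified rendering, not the verbatim helper from test/common/autotest_common.sh):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1               # no pid supplied
        kill -0 "$pid" 2>/dev/null || return 0  # process already gone
        local pname
        pname=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0 in the run above
        # the real helper takes a different branch when pname is sudo; omitted here
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true         # reap it when it is a child of this shell
    }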
00:06:28.941 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:28.941 02:53:14 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:28.941 02:53:14 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:28.941 02:53:14 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:28.941 02:53:14 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:28.941 02:53:14 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:28.941 02:53:14 event -- common/autotest_common.sh@10 -- # set +x 00:06:28.941 ************************************ 00:06:28.941 START TEST event_perf 00:06:28.941 ************************************ 00:06:28.941 02:53:14 event.event_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:28.941 Running I/O for 1 seconds...[2024-05-14 02:53:14.762804] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:06:28.941 [2024-05-14 02:53:14.763108] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75971 ] 00:06:28.941 [2024-05-14 02:53:14.913098] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:28.941 [2024-05-14 02:53:14.932368] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:29.200 [2024-05-14 02:53:14.970606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.200 [2024-05-14 02:53:14.970763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:29.200 [2024-05-14 02:53:14.970827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.200 Running I/O for 1 seconds...[2024-05-14 02:53:14.970894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:30.136 00:06:30.136 lcore 0: 202918 00:06:30.136 lcore 1: 202917 00:06:30.136 lcore 2: 202918 00:06:30.136 lcore 3: 202919 00:06:30.136 done. 00:06:30.136 00:06:30.136 real 0m1.325s 00:06:30.136 user 0m4.104s 00:06:30.136 sys 0m0.099s 00:06:30.136 02:53:16 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:30.136 02:53:16 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:30.136 ************************************ 00:06:30.136 END TEST event_perf 00:06:30.136 ************************************ 00:06:30.136 02:53:16 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:30.136 02:53:16 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:30.136 02:53:16 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:30.136 02:53:16 event -- common/autotest_common.sh@10 -- # set +x 00:06:30.136 ************************************ 00:06:30.136 START TEST event_reactor 00:06:30.136 ************************************ 00:06:30.136 02:53:16 event.event_reactor -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:30.136 [2024-05-14 02:53:16.129589] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 
00:06:30.136 [2024-05-14 02:53:16.129737] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76011 ] 00:06:30.395 [2024-05-14 02:53:16.264560] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:30.395 [2024-05-14 02:53:16.285777] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.395 [2024-05-14 02:53:16.317986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.773 test_start 00:06:31.773 oneshot 00:06:31.773 tick 100 00:06:31.773 tick 100 00:06:31.773 tick 250 00:06:31.773 tick 100 00:06:31.773 tick 100 00:06:31.773 tick 100 00:06:31.773 tick 500 00:06:31.773 tick 250 00:06:31.773 tick 100 00:06:31.773 tick 100 00:06:31.773 tick 250 00:06:31.773 tick 100 00:06:31.773 tick 100 00:06:31.773 test_end 00:06:31.773 ************************************ 00:06:31.773 END TEST event_reactor 00:06:31.773 ************************************ 00:06:31.773 00:06:31.773 real 0m1.296s 00:06:31.773 user 0m1.117s 00:06:31.773 sys 0m0.073s 00:06:31.773 02:53:17 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:31.773 02:53:17 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:31.773 02:53:17 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:31.773 02:53:17 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:31.773 02:53:17 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:31.773 02:53:17 event -- common/autotest_common.sh@10 -- # set +x 00:06:31.773 ************************************ 00:06:31.773 START TEST event_reactor_perf 00:06:31.773 ************************************ 00:06:31.773 02:53:17 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:31.773 [2024-05-14 02:53:17.490457] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:06:31.773 [2024-05-14 02:53:17.490651] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76042 ] 00:06:31.773 [2024-05-14 02:53:17.638709] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
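Every test in this section is launched through the run_test wrapper, which produces the START TEST / END TEST banners and the real/user/sys timings interleaved in the output above and below. A rough sketch of that wrapper, assuming a simplified form (the actual helper in autotest_common.sh also validates its arguments and records the result for the final report):

    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"            # emits the real/user/sys lines seen in the log
        local rc=$?
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return $rc
    }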
00:06:31.773 [2024-05-14 02:53:17.659418] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.773 [2024-05-14 02:53:17.692539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.150 test_start 00:06:33.150 test_end 00:06:33.150 Performance: 317517 events per second 00:06:33.150 00:06:33.150 real 0m1.324s 00:06:33.150 user 0m1.119s 00:06:33.150 sys 0m0.096s 00:06:33.150 ************************************ 00:06:33.150 END TEST event_reactor_perf 00:06:33.150 ************************************ 00:06:33.150 02:53:18 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:33.150 02:53:18 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:33.150 02:53:18 event -- event/event.sh@49 -- # uname -s 00:06:33.150 02:53:18 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:33.150 02:53:18 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:33.150 02:53:18 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:33.150 02:53:18 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:33.150 02:53:18 event -- common/autotest_common.sh@10 -- # set +x 00:06:33.150 ************************************ 00:06:33.150 START TEST event_scheduler 00:06:33.150 ************************************ 00:06:33.150 02:53:18 event.event_scheduler -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:33.150 * Looking for test storage... 00:06:33.150 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:33.150 02:53:18 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:33.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.150 02:53:18 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=76104 00:06:33.150 02:53:18 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:33.150 02:53:18 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:33.150 02:53:18 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 76104 00:06:33.150 02:53:18 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 76104 ']' 00:06:33.150 02:53:18 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.150 02:53:18 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:33.150 02:53:18 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.150 02:53:18 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:33.150 02:53:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:33.150 [2024-05-14 02:53:19.010683] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 
00:06:33.150 [2024-05-14 02:53:19.011148] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76104 ] 00:06:33.150 [2024-05-14 02:53:19.161582] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:33.409 [2024-05-14 02:53:19.185033] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:33.409 [2024-05-14 02:53:19.230347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.409 [2024-05-14 02:53:19.230498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.409 [2024-05-14 02:53:19.230580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:33.409 [2024-05-14 02:53:19.230644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:33.975 02:53:19 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:33.975 02:53:19 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:06:33.975 02:53:19 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:33.975 02:53:19 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.975 02:53:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:33.975 POWER: Env isn't set yet! 00:06:33.975 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:33.975 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:33.975 POWER: Cannot set governor of lcore 0 to userspace 00:06:33.975 POWER: Attempting to initialise PSTAT power management... 00:06:33.975 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:33.975 POWER: Cannot set governor of lcore 0 to performance 00:06:33.975 POWER: Attempting to initialise AMD PSTATE power management... 00:06:33.975 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:33.975 POWER: Cannot set governor of lcore 0 to userspace 00:06:33.975 POWER: Attempting to initialise CPPC power management... 00:06:33.975 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:33.975 POWER: Cannot set governor of lcore 0 to userspace 00:06:33.975 POWER: Attempting to initialise VM power management... 
00:06:33.975 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:33.975 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:33.975 POWER: Unable to set Power Management Environment for lcore 0 00:06:33.975 [2024-05-14 02:53:19.964782] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:06:33.975 [2024-05-14 02:53:19.964825] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:06:33.975 [2024-05-14 02:53:19.964841] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:06:33.976 02:53:19 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.976 02:53:19 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:33.976 02:53:19 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.976 02:53:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:34.235 [2024-05-14 02:53:20.018797] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:34.235 02:53:20 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.235 02:53:20 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:34.235 02:53:20 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:34.235 02:53:20 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:34.235 02:53:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:34.235 ************************************ 00:06:34.235 START TEST scheduler_create_thread 00:06:34.235 ************************************ 00:06:34.235 02:53:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:06:34.235 02:53:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:34.235 02:53:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.235 02:53:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.235 2 00:06:34.235 02:53:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.235 02:53:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:34.235 02:53:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.235 02:53:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.235 3 00:06:34.235 02:53:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.235 02:53:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:34.235 02:53:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.235 02:53:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.235 4 00:06:34.235 02:53:20 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.235 02:53:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:34.235 02:53:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.235 02:53:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.235 5 00:06:34.235 02:53:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.235 02:53:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:34.235 02:53:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.235 02:53:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.235 6 00:06:34.235 02:53:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.235 02:53:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:34.235 02:53:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.235 02:53:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.235 7 00:06:34.235 02:53:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.235 02:53:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:34.235 02:53:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.235 02:53:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.235 8 00:06:34.235 02:53:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.235 02:53:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:34.235 02:53:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.235 02:53:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.235 9 00:06:34.235 02:53:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.235 02:53:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:34.235 02:53:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.235 02:53:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.235 10 00:06:34.235 02:53:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.235 02:53:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:34.235 02:53:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.235 02:53:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.235 02:53:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.235 02:53:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:34.235 02:53:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:34.235 02:53:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.235 02:53:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.235 02:53:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.235 02:53:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:34.235 02:53:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.235 02:53:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.610 02:53:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.610 02:53:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:35.610 02:53:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:35.610 02:53:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.610 02:53:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.985 ************************************ 00:06:36.985 END TEST scheduler_create_thread 00:06:36.985 ************************************ 00:06:36.985 02:53:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.985 00:06:36.985 real 0m2.618s 00:06:36.985 user 0m0.016s 00:06:36.985 sys 0m0.008s 00:06:36.985 02:53:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:36.985 02:53:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.985 02:53:22 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:36.985 02:53:22 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 76104 00:06:36.985 02:53:22 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 76104 ']' 00:06:36.985 02:53:22 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 76104 00:06:36.985 02:53:22 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 00:06:36.985 02:53:22 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:36.985 02:53:22 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76104 00:06:36.985 killing process with pid 76104 00:06:36.986 02:53:22 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:06:36.986 02:53:22 
event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:06:36.986 02:53:22 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76104' 00:06:36.986 02:53:22 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 76104 00:06:36.986 02:53:22 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 76104 00:06:37.244 [2024-05-14 02:53:23.131904] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:37.504 00:06:37.504 real 0m4.495s 00:06:37.504 user 0m8.484s 00:06:37.504 sys 0m0.408s 00:06:37.504 ************************************ 00:06:37.504 END TEST event_scheduler 00:06:37.504 ************************************ 00:06:37.504 02:53:23 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:37.504 02:53:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:37.504 02:53:23 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:37.504 02:53:23 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:37.504 02:53:23 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:37.504 02:53:23 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:37.504 02:53:23 event -- common/autotest_common.sh@10 -- # set +x 00:06:37.504 ************************************ 00:06:37.504 START TEST app_repeat 00:06:37.504 ************************************ 00:06:37.504 02:53:23 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:06:37.504 02:53:23 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.504 02:53:23 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.504 02:53:23 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:37.504 02:53:23 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:37.504 02:53:23 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:37.504 02:53:23 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:37.504 02:53:23 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:37.504 02:53:23 event.app_repeat -- event/event.sh@19 -- # repeat_pid=76205 00:06:37.504 Process app_repeat pid: 76205 00:06:37.504 02:53:23 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:37.504 02:53:23 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 76205' 00:06:37.504 02:53:23 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:37.504 spdk_app_start Round 0 00:06:37.504 02:53:23 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:37.504 02:53:23 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:37.504 02:53:23 event.app_repeat -- event/event.sh@25 -- # waitforlisten 76205 /var/tmp/spdk-nbd.sock 00:06:37.504 02:53:23 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 76205 ']' 00:06:37.504 02:53:23 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:37.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
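The scheduler_create_thread test above drives the scheduler test application entirely over RPC. Collecting the calls traced in the log into one place (an assumption: rpc_cmd resolves to scripts/rpc.py pointed at the app started with --wait-for-rpc; the log repeats the pinned pair with -m 0x2, 0x4 and 0x8 for the other cores):

    # one busy and one idle thread pinned to core 0 (repeated per core in the log)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
    # an unpinned thread whose active load is then raised to 50
    thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50
    # a throwaway thread created only to exercise deletion
    thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$thread_id"

The dynamic scheduler then has to rebalance these threads without dpdk_governor support, since the POWER initialisation above failed inside the VM.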
00:06:37.504 02:53:23 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:37.504 02:53:23 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:37.504 02:53:23 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:37.504 02:53:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:37.504 [2024-05-14 02:53:23.444694] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:06:37.504 [2024-05-14 02:53:23.444898] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76205 ] 00:06:37.763 [2024-05-14 02:53:23.593463] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:37.763 [2024-05-14 02:53:23.611492] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:37.763 [2024-05-14 02:53:23.646106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.763 [2024-05-14 02:53:23.646500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.698 02:53:24 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:38.698 02:53:24 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:38.698 02:53:24 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:38.698 Malloc0 00:06:38.698 02:53:24 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:38.958 Malloc1 00:06:38.958 02:53:24 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:38.958 02:53:24 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.958 02:53:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:38.958 02:53:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:38.958 02:53:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.958 02:53:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:38.958 02:53:24 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:38.958 02:53:24 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.958 02:53:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:38.958 02:53:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:38.958 02:53:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.958 02:53:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:38.958 02:53:24 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:38.958 02:53:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:38.958 02:53:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:38.958 02:53:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_start_disk Malloc0 /dev/nbd0 00:06:39.217 /dev/nbd0 00:06:39.217 02:53:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:39.217 02:53:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:39.217 02:53:25 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:39.217 02:53:25 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:39.217 02:53:25 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:39.217 02:53:25 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:39.217 02:53:25 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:39.217 02:53:25 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:39.217 02:53:25 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:39.217 02:53:25 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:39.217 02:53:25 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:39.217 1+0 records in 00:06:39.217 1+0 records out 00:06:39.217 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000209313 s, 19.6 MB/s 00:06:39.217 02:53:25 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:39.217 02:53:25 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:39.217 02:53:25 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:39.217 02:53:25 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:39.217 02:53:25 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:39.217 02:53:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:39.217 02:53:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:39.217 02:53:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:39.477 /dev/nbd1 00:06:39.477 02:53:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:39.477 02:53:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:39.477 02:53:25 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:39.477 02:53:25 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:39.477 02:53:25 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:39.477 02:53:25 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:39.477 02:53:25 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:39.477 02:53:25 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:39.477 02:53:25 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:39.477 02:53:25 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:39.477 02:53:25 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:39.477 1+0 records in 00:06:39.477 1+0 records out 00:06:39.477 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000408611 s, 10.0 MB/s 00:06:39.477 02:53:25 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:39.477 02:53:25 event.app_repeat -- 
common/autotest_common.sh@882 -- # size=4096 00:06:39.477 02:53:25 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:39.477 02:53:25 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:39.477 02:53:25 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:39.477 02:53:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:39.477 02:53:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:39.477 02:53:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:39.477 02:53:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.477 02:53:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:39.736 02:53:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:39.736 { 00:06:39.736 "nbd_device": "/dev/nbd0", 00:06:39.736 "bdev_name": "Malloc0" 00:06:39.736 }, 00:06:39.736 { 00:06:39.736 "nbd_device": "/dev/nbd1", 00:06:39.736 "bdev_name": "Malloc1" 00:06:39.736 } 00:06:39.736 ]' 00:06:39.736 02:53:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:39.736 02:53:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:39.736 { 00:06:39.736 "nbd_device": "/dev/nbd0", 00:06:39.736 "bdev_name": "Malloc0" 00:06:39.736 }, 00:06:39.736 { 00:06:39.736 "nbd_device": "/dev/nbd1", 00:06:39.736 "bdev_name": "Malloc1" 00:06:39.736 } 00:06:39.736 ]' 00:06:39.736 02:53:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:39.736 /dev/nbd1' 00:06:39.736 02:53:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:39.736 02:53:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:39.736 /dev/nbd1' 00:06:39.736 02:53:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:39.736 02:53:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:39.736 02:53:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:39.736 02:53:25 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:39.736 02:53:25 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:39.736 02:53:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.736 02:53:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:39.736 02:53:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:39.736 02:53:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:39.736 02:53:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:39.736 02:53:25 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:39.736 256+0 records in 00:06:39.736 256+0 records out 00:06:39.736 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010741 s, 97.6 MB/s 00:06:39.736 02:53:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:39.736 02:53:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:39.736 256+0 records in 00:06:39.736 256+0 records out 00:06:39.736 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.025541 s, 41.1 MB/s 
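Earlier in this round, the test lists the devices exported on the dedicated nbd socket and checks that both malloc bdevs are attached before the data pass. A sketch of that accounting step using the same socket path and jq filter that appear in the log (an assumption: run from the SPDK repository root):

    disks_json=$(scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
    count=$(echo "$disks_json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd)
    if [ "$count" -ne 2 ]; then
        echo 'expected Malloc0 and Malloc1 to be exported as nbd devices' >&2
        exit 1
    fi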
00:06:39.736 02:53:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:39.736 02:53:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:39.995 256+0 records in 00:06:39.995 256+0 records out 00:06:39.995 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0261289 s, 40.1 MB/s 00:06:39.995 02:53:25 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:39.995 02:53:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.995 02:53:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:39.995 02:53:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:39.995 02:53:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:39.995 02:53:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:39.995 02:53:25 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:39.995 02:53:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:39.995 02:53:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:39.995 02:53:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:39.995 02:53:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:39.995 02:53:25 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:39.995 02:53:25 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:39.995 02:53:25 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.995 02:53:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.995 02:53:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:39.995 02:53:25 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:39.995 02:53:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.995 02:53:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:40.253 02:53:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:40.253 02:53:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:40.253 02:53:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:40.253 02:53:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:40.253 02:53:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:40.253 02:53:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:40.253 02:53:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:40.253 02:53:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:40.253 02:53:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:40.254 02:53:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:40.537 02:53:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:40.537 02:53:26 
event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:40.537 02:53:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:40.537 02:53:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:40.537 02:53:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:40.537 02:53:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:40.537 02:53:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:40.537 02:53:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:40.537 02:53:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:40.537 02:53:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.537 02:53:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:40.811 02:53:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:40.811 02:53:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:40.811 02:53:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:40.811 02:53:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:40.811 02:53:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:40.811 02:53:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:40.811 02:53:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:40.811 02:53:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:40.811 02:53:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:40.811 02:53:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:40.811 02:53:26 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:40.811 02:53:26 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:40.811 02:53:26 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:41.071 02:53:26 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:41.071 [2024-05-14 02:53:26.998613] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:41.071 [2024-05-14 02:53:27.030813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:41.071 [2024-05-14 02:53:27.030824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.071 [2024-05-14 02:53:27.061721] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:41.071 [2024-05-14 02:53:27.061849] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:44.357 02:53:29 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:44.357 spdk_app_start Round 1 00:06:44.357 02:53:29 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:44.357 02:53:29 event.app_repeat -- event/event.sh@25 -- # waitforlisten 76205 /var/tmp/spdk-nbd.sock 00:06:44.357 02:53:29 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 76205 ']' 00:06:44.357 02:53:29 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:44.357 02:53:29 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:44.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
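Each repeat round runs the same write-and-verify cycle over the two nbd devices and then detaches them, as traced above for Round 0. A condensed sketch of that cycle, reconstructed from the dd, cmp and nbd_stop_disk invocations in the log (an assumption: the reference file path is shortened here; the log keeps it under test/event/):

    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256   # 1 MiB of reference data
    for nbd in /dev/nbd0 /dev/nbd1; do                    # write pass
        dd if=nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct
    done
    for nbd in /dev/nbd0 /dev/nbd1; do                    # verify pass
        cmp -b -n 1M nbdrandtest "$nbd"
    done
    rm nbdrandtest
    for nbd in /dev/nbd0 /dev/nbd1; do                    # detach both devices
        scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk "$nbd"
    done

Round 1 then repeats the cycle against freshly created Malloc0 and Malloc1 bdevs, which is what the spdk_app_start Round 1 messages above mark.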
00:06:44.357 02:53:29 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:44.357 02:53:29 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:44.357 02:53:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:44.357 02:53:30 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:44.357 02:53:30 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:44.357 02:53:30 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:44.616 Malloc0 00:06:44.616 02:53:30 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:44.616 Malloc1 00:06:44.616 02:53:30 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:44.616 02:53:30 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.616 02:53:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:44.616 02:53:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:44.616 02:53:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.616 02:53:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:44.616 02:53:30 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:44.616 02:53:30 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.616 02:53:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:44.616 02:53:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:44.616 02:53:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.616 02:53:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:44.616 02:53:30 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:44.616 02:53:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:44.616 02:53:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:44.616 02:53:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:44.873 /dev/nbd0 00:06:45.132 02:53:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:45.132 02:53:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:45.132 02:53:30 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:45.132 02:53:30 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:45.132 02:53:30 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:45.132 02:53:30 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:45.132 02:53:30 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:45.132 02:53:30 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:45.132 02:53:30 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:45.132 02:53:30 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:45.132 02:53:30 event.app_repeat -- 
common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:45.132 1+0 records in 00:06:45.132 1+0 records out 00:06:45.132 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000219612 s, 18.7 MB/s 00:06:45.132 02:53:30 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:45.132 02:53:30 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:45.132 02:53:30 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:45.132 02:53:30 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:45.132 02:53:30 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:45.132 02:53:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:45.132 02:53:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:45.132 02:53:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:45.132 /dev/nbd1 00:06:45.391 02:53:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:45.391 02:53:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:45.391 02:53:31 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:45.391 02:53:31 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:45.391 02:53:31 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:45.391 02:53:31 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:45.391 02:53:31 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:45.391 02:53:31 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:45.391 02:53:31 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:45.391 02:53:31 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:45.391 02:53:31 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:45.391 1+0 records in 00:06:45.391 1+0 records out 00:06:45.391 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000263662 s, 15.5 MB/s 00:06:45.391 02:53:31 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:45.391 02:53:31 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:45.391 02:53:31 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:45.391 02:53:31 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:45.391 02:53:31 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:45.391 02:53:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:45.391 02:53:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:45.391 02:53:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:45.391 02:53:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.391 02:53:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:45.391 02:53:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:45.391 { 00:06:45.391 
"nbd_device": "/dev/nbd0", 00:06:45.391 "bdev_name": "Malloc0" 00:06:45.392 }, 00:06:45.392 { 00:06:45.392 "nbd_device": "/dev/nbd1", 00:06:45.392 "bdev_name": "Malloc1" 00:06:45.392 } 00:06:45.392 ]' 00:06:45.392 02:53:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:45.392 { 00:06:45.392 "nbd_device": "/dev/nbd0", 00:06:45.392 "bdev_name": "Malloc0" 00:06:45.392 }, 00:06:45.392 { 00:06:45.392 "nbd_device": "/dev/nbd1", 00:06:45.392 "bdev_name": "Malloc1" 00:06:45.392 } 00:06:45.392 ]' 00:06:45.651 02:53:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:45.651 02:53:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:45.651 /dev/nbd1' 00:06:45.651 02:53:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:45.651 /dev/nbd1' 00:06:45.651 02:53:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:45.651 02:53:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:45.651 02:53:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:45.651 02:53:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:45.651 02:53:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:45.651 02:53:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:45.651 02:53:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.651 02:53:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:45.651 02:53:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:45.651 02:53:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:45.651 02:53:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:45.651 02:53:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:45.651 256+0 records in 00:06:45.651 256+0 records out 00:06:45.651 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103424 s, 101 MB/s 00:06:45.651 02:53:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:45.651 02:53:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:45.651 256+0 records in 00:06:45.651 256+0 records out 00:06:45.651 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0243634 s, 43.0 MB/s 00:06:45.651 02:53:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:45.651 02:53:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:45.651 256+0 records in 00:06:45.651 256+0 records out 00:06:45.651 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.028415 s, 36.9 MB/s 00:06:45.651 02:53:31 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:45.651 02:53:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.651 02:53:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:45.651 02:53:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:45.651 02:53:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:45.651 02:53:31 event.app_repeat -- bdev/nbd_common.sh@74 
-- # '[' verify = write ']' 00:06:45.651 02:53:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:45.651 02:53:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:45.651 02:53:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:45.651 02:53:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:45.651 02:53:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:45.651 02:53:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:45.651 02:53:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:45.651 02:53:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.651 02:53:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.651 02:53:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:45.651 02:53:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:45.651 02:53:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:45.651 02:53:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:45.911 02:53:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:45.911 02:53:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:45.911 02:53:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:45.911 02:53:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:45.911 02:53:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:45.911 02:53:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:45.911 02:53:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:45.911 02:53:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:45.911 02:53:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:45.911 02:53:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:46.169 02:53:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:46.169 02:53:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:46.169 02:53:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:46.169 02:53:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:46.169 02:53:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:46.169 02:53:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:46.169 02:53:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:46.169 02:53:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:46.169 02:53:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:46.169 02:53:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.169 02:53:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:46.428 02:53:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[]' 00:06:46.428 02:53:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:46.428 02:53:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:46.687 02:53:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:46.687 02:53:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:46.687 02:53:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:46.687 02:53:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:46.687 02:53:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:46.687 02:53:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:46.687 02:53:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:46.687 02:53:32 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:46.687 02:53:32 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:46.687 02:53:32 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:46.687 02:53:32 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:46.945 [2024-05-14 02:53:32.807851] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:46.945 [2024-05-14 02:53:32.838458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.945 [2024-05-14 02:53:32.838459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.945 [2024-05-14 02:53:32.866535] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:46.945 [2024-05-14 02:53:32.866623] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:49.915 spdk_app_start Round 2 00:06:49.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:49.915 02:53:35 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:49.915 02:53:35 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:49.915 02:53:35 event.app_repeat -- event/event.sh@25 -- # waitforlisten 76205 /var/tmp/spdk-nbd.sock 00:06:49.915 02:53:35 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 76205 ']' 00:06:49.915 02:53:35 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:49.915 02:53:35 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:49.915 02:53:35 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
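The nbd_get_count checks traced above (nbd_common.sh@61-66) ask the target which nbd devices are still exported: the RPC returns a JSON array, jq extracts each .nbd_device, and grep -c counts entries matching /dev/nbd. Once the devices are stopped the list is empty, the count is 0, and the '0 -ne 0' guard passes. A rough sketch of that check, with the socket path as used in this run and error handling omitted:

  rpc_server=/var/tmp/spdk-nbd.sock
  nbd_disks_json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_server" nbd_get_disks)
  nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
  count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)   # '|| true' keeps count=0 when nothing matches
  if [ "$count" -ne 0 ]; then
      echo "unexpected nbd devices still exported: $count"
  fi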
00:06:49.915 02:53:35 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:49.915 02:53:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:50.174 02:53:35 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:50.174 02:53:35 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:50.174 02:53:35 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:50.174 Malloc0 00:06:50.174 02:53:36 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:50.433 Malloc1 00:06:50.433 02:53:36 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:50.433 02:53:36 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.433 02:53:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:50.433 02:53:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:50.433 02:53:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.433 02:53:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:50.433 02:53:36 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:50.433 02:53:36 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.433 02:53:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:50.433 02:53:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:50.433 02:53:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.433 02:53:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:50.433 02:53:36 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:50.433 02:53:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:50.433 02:53:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:50.433 02:53:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:50.691 /dev/nbd0 00:06:50.691 02:53:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:50.691 02:53:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:50.691 02:53:36 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:50.691 02:53:36 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:50.691 02:53:36 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:50.691 02:53:36 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:50.691 02:53:36 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:50.691 02:53:36 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:50.691 02:53:36 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:50.692 02:53:36 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:50.692 02:53:36 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:50.692 1+0 records in 00:06:50.692 1+0 records out 
00:06:50.692 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000402661 s, 10.2 MB/s 00:06:50.692 02:53:36 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:50.692 02:53:36 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:50.692 02:53:36 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:50.692 02:53:36 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:50.692 02:53:36 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:50.692 02:53:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:50.692 02:53:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:50.692 02:53:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:50.953 /dev/nbd1 00:06:50.953 02:53:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:50.953 02:53:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:50.953 02:53:36 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:50.953 02:53:36 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:50.953 02:53:36 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:50.953 02:53:36 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:50.953 02:53:36 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:50.953 02:53:36 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:50.953 02:53:36 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:50.953 02:53:36 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:50.953 02:53:36 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:50.953 1+0 records in 00:06:50.953 1+0 records out 00:06:50.953 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000313385 s, 13.1 MB/s 00:06:50.953 02:53:36 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:50.953 02:53:36 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:50.953 02:53:36 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:50.953 02:53:36 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:50.953 02:53:36 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:50.953 02:53:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:50.953 02:53:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:50.953 02:53:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:50.953 02:53:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.953 02:53:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:51.211 02:53:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:51.211 { 00:06:51.211 "nbd_device": "/dev/nbd0", 00:06:51.211 "bdev_name": "Malloc0" 00:06:51.211 }, 00:06:51.211 { 00:06:51.211 "nbd_device": "/dev/nbd1", 00:06:51.211 "bdev_name": "Malloc1" 00:06:51.211 } 
00:06:51.211 ]' 00:06:51.211 02:53:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:51.211 { 00:06:51.211 "nbd_device": "/dev/nbd0", 00:06:51.211 "bdev_name": "Malloc0" 00:06:51.211 }, 00:06:51.211 { 00:06:51.211 "nbd_device": "/dev/nbd1", 00:06:51.211 "bdev_name": "Malloc1" 00:06:51.211 } 00:06:51.211 ]' 00:06:51.211 02:53:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:51.471 02:53:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:51.471 /dev/nbd1' 00:06:51.471 02:53:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:51.471 /dev/nbd1' 00:06:51.471 02:53:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:51.471 02:53:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:51.471 02:53:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:51.471 02:53:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:51.471 02:53:37 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:51.471 02:53:37 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:51.471 02:53:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.471 02:53:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:51.471 02:53:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:51.471 02:53:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:51.471 02:53:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:51.471 02:53:37 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:51.471 256+0 records in 00:06:51.471 256+0 records out 00:06:51.471 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.007082 s, 148 MB/s 00:06:51.471 02:53:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:51.471 02:53:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:51.471 256+0 records in 00:06:51.471 256+0 records out 00:06:51.471 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0268641 s, 39.0 MB/s 00:06:51.471 02:53:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:51.471 02:53:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:51.471 256+0 records in 00:06:51.471 256+0 records out 00:06:51.471 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0328761 s, 31.9 MB/s 00:06:51.471 02:53:37 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:51.471 02:53:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.471 02:53:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:51.471 02:53:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:51.471 02:53:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:51.471 02:53:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:51.471 02:53:37 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:51.471 02:53:37 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:51.471 02:53:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:51.471 02:53:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:51.471 02:53:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:51.471 02:53:37 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:51.471 02:53:37 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:51.471 02:53:37 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.471 02:53:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.471 02:53:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:51.471 02:53:37 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:51.471 02:53:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:51.471 02:53:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:51.730 02:53:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:51.730 02:53:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:51.730 02:53:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:51.730 02:53:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:51.730 02:53:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:51.730 02:53:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:51.730 02:53:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:51.730 02:53:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:51.730 02:53:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:51.730 02:53:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:51.988 02:53:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:51.988 02:53:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:51.988 02:53:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:51.988 02:53:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:51.988 02:53:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:51.988 02:53:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:51.988 02:53:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:51.988 02:53:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:51.988 02:53:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:51.988 02:53:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.988 02:53:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:52.248 02:53:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:52.248 02:53:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:52.248 02:53:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:52.248 02:53:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:52.248 02:53:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:52.248 02:53:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:52.248 02:53:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:52.248 02:53:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:52.248 02:53:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:52.248 02:53:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:52.248 02:53:38 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:52.248 02:53:38 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:52.248 02:53:38 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:52.507 02:53:38 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:52.507 [2024-05-14 02:53:38.449224] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:52.507 [2024-05-14 02:53:38.485154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.507 [2024-05-14 02:53:38.485157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.507 [2024-05-14 02:53:38.513711] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:52.507 [2024-05-14 02:53:38.513815] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:55.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:55.825 02:53:41 event.app_repeat -- event/event.sh@38 -- # waitforlisten 76205 /var/tmp/spdk-nbd.sock 00:06:55.825 02:53:41 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 76205 ']' 00:06:55.825 02:53:41 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:55.825 02:53:41 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:55.825 02:53:41 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
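After each nbd_stop_disk RPC above, waitfornbd_exit (nbd_common.sh@35-45) polls /proc/partitions until the device name disappears, bounded at 20 attempts; in this run the grep misses on the first try, so the loop breaks immediately. A hedged reconstruction of that loop, with the retry delay assumed since it never executes in the trace:

  waitfornbd_exit() {
      local nbd_name=$1
      local i
      for ((i = 1; i <= 20; i++)); do
          if grep -q -w "$nbd_name" /proc/partitions; then
              sleep 0.1        # assumed back-off; not exercised in this run
          else
              break            # device no longer listed, matching the traced 'break' at @41
          fi
      done
      return 0
  }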
00:06:55.825 02:53:41 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:55.825 02:53:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:55.825 02:53:41 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:55.825 02:53:41 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:55.825 02:53:41 event.app_repeat -- event/event.sh@39 -- # killprocess 76205 00:06:55.825 02:53:41 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 76205 ']' 00:06:55.825 02:53:41 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 76205 00:06:55.825 02:53:41 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:06:55.825 02:53:41 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:55.825 02:53:41 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76205 00:06:55.825 killing process with pid 76205 00:06:55.825 02:53:41 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:55.825 02:53:41 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:55.825 02:53:41 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76205' 00:06:55.825 02:53:41 event.app_repeat -- common/autotest_common.sh@965 -- # kill 76205 00:06:55.825 02:53:41 event.app_repeat -- common/autotest_common.sh@970 -- # wait 76205 00:06:55.825 spdk_app_start is called in Round 0. 00:06:55.825 Shutdown signal received, stop current app iteration 00:06:55.825 Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 reinitialization... 00:06:55.825 spdk_app_start is called in Round 1. 00:06:55.825 Shutdown signal received, stop current app iteration 00:06:55.825 Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 reinitialization... 00:06:55.825 spdk_app_start is called in Round 2. 00:06:55.825 Shutdown signal received, stop current app iteration 00:06:55.825 Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 reinitialization... 00:06:55.825 spdk_app_start is called in Round 3. 00:06:55.825 Shutdown signal received, stop current app iteration 00:06:55.825 ************************************ 00:06:55.825 END TEST app_repeat 00:06:55.825 02:53:41 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:55.825 02:53:41 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:55.825 00:06:55.825 real 0m18.383s 00:06:55.825 user 0m41.457s 00:06:55.825 sys 0m2.582s 00:06:55.825 02:53:41 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:55.825 02:53:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:55.825 ************************************ 00:06:55.825 02:53:41 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:55.825 02:53:41 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:55.825 02:53:41 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:55.825 02:53:41 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:55.825 02:53:41 event -- common/autotest_common.sh@10 -- # set +x 00:06:55.825 ************************************ 00:06:55.825 START TEST cpu_locks 00:06:55.825 ************************************ 00:06:55.825 02:53:41 event.cpu_locks -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:56.083 * Looking for test storage... 
00:06:56.083 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:56.083 02:53:41 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:56.083 02:53:41 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:56.083 02:53:41 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:56.083 02:53:41 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:56.083 02:53:41 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:56.083 02:53:41 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:56.083 02:53:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:56.083 ************************************ 00:06:56.083 START TEST default_locks 00:06:56.083 ************************************ 00:06:56.083 02:53:41 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:06:56.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.083 02:53:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=76634 00:06:56.083 02:53:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 76634 00:06:56.083 02:53:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:56.083 02:53:41 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 76634 ']' 00:06:56.083 02:53:41 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.083 02:53:41 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:56.083 02:53:41 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.083 02:53:41 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:56.083 02:53:41 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:56.083 [2024-05-14 02:53:42.018718] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:06:56.083 [2024-05-14 02:53:42.018914] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76634 ] 00:06:56.342 [2024-05-14 02:53:42.166106] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:56.342 [2024-05-14 02:53:42.183993] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.342 [2024-05-14 02:53:42.219211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.276 02:53:42 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:57.276 02:53:42 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:06:57.276 02:53:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 76634 00:06:57.276 02:53:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 76634 00:06:57.276 02:53:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:57.534 02:53:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 76634 00:06:57.534 02:53:43 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 76634 ']' 00:06:57.534 02:53:43 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 76634 00:06:57.534 02:53:43 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:06:57.534 02:53:43 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:57.534 02:53:43 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76634 00:06:57.534 killing process with pid 76634 00:06:57.534 02:53:43 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:57.534 02:53:43 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:57.534 02:53:43 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76634' 00:06:57.534 02:53:43 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 76634 00:06:57.534 02:53:43 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 76634 00:06:57.793 02:53:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 76634 00:06:57.793 02:53:43 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:57.793 02:53:43 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 76634 00:06:57.793 02:53:43 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:57.793 02:53:43 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:57.793 02:53:43 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:57.793 02:53:43 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:57.793 02:53:43 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 76634 00:06:57.794 02:53:43 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 76634 ']' 00:06:57.794 02:53:43 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.794 02:53:43 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:57.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.794 ERROR: process (pid: 76634) is no longer running 00:06:57.794 02:53:43 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
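The locks_exist check traced above (cpu_locks.sh@22) verifies that the spdk_tgt started with -m 0x1 is actually holding a CPU core lock: lslocks lists the locks held by the pid, and grep -q looks for the spdk_cpu_lock file name. A minimal sketch of that check, with the pid taken from this run and the pipe inferred from the two traced commands:

  locks_exist() {
      lslocks -p "$1" | grep -q spdk_cpu_lock    # succeeds only if the pid holds a core lock file
  }
  locks_exist 76634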
00:06:57.794 02:53:43 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:57.794 02:53:43 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:57.794 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (76634) - No such process 00:06:57.794 02:53:43 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:57.794 02:53:43 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:06:57.794 02:53:43 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:57.794 02:53:43 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:57.794 02:53:43 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:57.794 02:53:43 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:57.794 02:53:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:57.794 02:53:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:57.794 02:53:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:57.794 02:53:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:57.794 00:06:57.794 real 0m1.785s 00:06:57.794 user 0m1.945s 00:06:57.794 sys 0m0.539s 00:06:57.794 02:53:43 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:57.794 02:53:43 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:57.794 ************************************ 00:06:57.794 END TEST default_locks 00:06:57.794 ************************************ 00:06:57.794 02:53:43 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:57.794 02:53:43 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:57.794 02:53:43 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:57.794 02:53:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:57.794 ************************************ 00:06:57.794 START TEST default_locks_via_rpc 00:06:57.794 ************************************ 00:06:57.794 02:53:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:06:57.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.794 02:53:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=76682 00:06:57.794 02:53:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 76682 00:06:57.794 02:53:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 76682 ']' 00:06:57.794 02:53:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.794 02:53:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:57.794 02:53:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:57.794 02:53:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
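waitforlisten, used before every RPC-driven step in these tests (common/autotest_common.sh@831-836 in the trace), blocks until the freshly launched spdk_tgt is reachable on its RPC UNIX socket. Only fragments of the helper are visible here, so the following is an illustrative approximation of the idea, not the real implementation:

  waitforlisten() {
      local pid=$1
      local rpc_addr=${2:-/var/tmp/spdk.sock}
      local max_retries=100
      local i
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      for ((i = 0; i < max_retries; i++)); do
          kill -0 "$pid" 2> /dev/null || return 1   # give up if the target process died
          if [ -S "$rpc_addr" ]; then               # socket present -> target is listening
              return 0
          fi
          sleep 0.5                                 # retry interval is an assumption
      done
      return 1
  }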
00:06:57.794 02:53:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:57.794 02:53:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.052 [2024-05-14 02:53:43.837751] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:06:58.052 [2024-05-14 02:53:43.837879] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76682 ] 00:06:58.052 [2024-05-14 02:53:43.971452] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:58.052 [2024-05-14 02:53:43.991516] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.052 [2024-05-14 02:53:44.025232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.986 02:53:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:58.986 02:53:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:58.986 02:53:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:58.986 02:53:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.986 02:53:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.986 02:53:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.986 02:53:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:58.986 02:53:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:58.986 02:53:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:58.986 02:53:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:58.986 02:53:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:58.986 02:53:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.986 02:53:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.986 02:53:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.986 02:53:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 76682 00:06:58.986 02:53:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 76682 00:06:58.986 02:53:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:59.245 02:53:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 76682 00:06:59.245 02:53:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 76682 ']' 00:06:59.245 02:53:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 76682 00:06:59.245 02:53:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:06:59.245 02:53:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:59.245 02:53:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- 
# ps --no-headers -o comm= 76682 00:06:59.245 killing process with pid 76682 00:06:59.245 02:53:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:59.245 02:53:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:59.245 02:53:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76682' 00:06:59.245 02:53:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 76682 00:06:59.245 02:53:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 76682 00:06:59.502 00:06:59.502 real 0m1.704s 00:06:59.502 user 0m1.876s 00:06:59.502 sys 0m0.447s 00:06:59.502 02:53:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:59.502 02:53:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.502 ************************************ 00:06:59.502 END TEST default_locks_via_rpc 00:06:59.502 ************************************ 00:06:59.502 02:53:45 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:59.502 02:53:45 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:59.502 02:53:45 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:59.502 02:53:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:59.502 ************************************ 00:06:59.502 START TEST non_locking_app_on_locked_coremask 00:06:59.502 ************************************ 00:06:59.502 02:53:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:06:59.502 02:53:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=76728 00:06:59.502 02:53:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 76728 /var/tmp/spdk.sock 00:06:59.502 02:53:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 76728 ']' 00:06:59.502 02:53:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:59.502 02:53:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.502 02:53:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:59.503 02:53:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.503 02:53:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:59.503 02:53:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:59.760 [2024-05-14 02:53:45.590627] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 
00:06:59.760 [2024-05-14 02:53:45.590772] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76728 ] 00:06:59.760 [2024-05-14 02:53:45.728525] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:59.760 [2024-05-14 02:53:45.747243] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.760 [2024-05-14 02:53:45.781067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:00.697 02:53:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:00.697 02:53:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:00.697 02:53:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:00.697 02:53:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=76744 00:07:00.697 02:53:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 76744 /var/tmp/spdk2.sock 00:07:00.697 02:53:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 76744 ']' 00:07:00.697 02:53:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:00.697 02:53:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:00.697 02:53:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:00.697 02:53:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:00.697 02:53:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:00.697 [2024-05-14 02:53:46.596292] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:07:00.697 [2024-05-14 02:53:46.596651] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76744 ] 00:07:00.955 [2024-05-14 02:53:46.739664] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:00.955 [2024-05-14 02:53:46.759966] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:00.955 [2024-05-14 02:53:46.760017] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.955 [2024-05-14 02:53:46.833132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.522 02:53:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:01.522 02:53:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:01.522 02:53:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 76728 00:07:01.522 02:53:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:01.522 02:53:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 76728 00:07:02.089 02:53:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 76728 00:07:02.089 02:53:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 76728 ']' 00:07:02.089 02:53:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 76728 00:07:02.089 02:53:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:02.089 02:53:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:02.089 02:53:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76728 00:07:02.089 killing process with pid 76728 00:07:02.089 02:53:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:02.089 02:53:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:02.089 02:53:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76728' 00:07:02.089 02:53:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 76728 00:07:02.089 02:53:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 76728 00:07:02.657 02:53:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 76744 00:07:02.657 02:53:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 76744 ']' 00:07:02.657 02:53:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 76744 00:07:02.657 02:53:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:02.657 02:53:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:02.657 02:53:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76744 00:07:02.657 02:53:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:02.657 02:53:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:02.657 killing process with pid 76744 00:07:02.657 02:53:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76744' 00:07:02.657 02:53:48 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 76744 00:07:02.657 02:53:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 76744 00:07:02.916 00:07:02.916 real 0m3.403s 00:07:02.916 user 0m3.876s 00:07:02.916 sys 0m0.908s 00:07:02.916 02:53:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:02.916 ************************************ 00:07:02.916 END TEST non_locking_app_on_locked_coremask 00:07:02.916 ************************************ 00:07:02.916 02:53:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:03.174 02:53:48 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:03.174 02:53:48 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:03.174 02:53:48 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:03.174 02:53:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:03.174 ************************************ 00:07:03.174 START TEST locking_app_on_unlocked_coremask 00:07:03.174 ************************************ 00:07:03.174 02:53:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:07:03.174 02:53:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=76808 00:07:03.174 02:53:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 76808 /var/tmp/spdk.sock 00:07:03.175 02:53:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:03.175 02:53:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 76808 ']' 00:07:03.175 02:53:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.175 02:53:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:03.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.175 02:53:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.175 02:53:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:03.175 02:53:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:03.175 [2024-05-14 02:53:49.077024] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:07:03.175 [2024-05-14 02:53:49.077528] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76808 ] 00:07:03.433 [2024-05-14 02:53:49.233497] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:03.433 [2024-05-14 02:53:49.254387] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:03.433 [2024-05-14 02:53:49.254428] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.433 [2024-05-14 02:53:49.284879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:04.609 02:53:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:04.609 02:53:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:04.609 02:53:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=76824 00:07:04.609 02:53:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:04.609 02:53:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 76824 /var/tmp/spdk2.sock 00:07:04.609 02:53:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 76824 ']' 00:07:04.609 02:53:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:04.609 02:53:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:04.609 02:53:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:04.609 02:53:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:04.609 02:53:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:04.609 [2024-05-14 02:53:50.106504] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:07:04.609 [2024-05-14 02:53:50.107160] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76824 ] 00:07:04.609 [2024-05-14 02:53:50.254753] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:04.609 [2024-05-14 02:53:50.279850] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.609 [2024-05-14 02:53:50.342986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.176 02:53:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:05.176 02:53:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:05.176 02:53:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 76824 00:07:05.176 02:53:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 76824 00:07:05.176 02:53:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:06.113 02:53:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 76808 00:07:06.113 02:53:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 76808 ']' 00:07:06.113 02:53:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 76808 00:07:06.114 02:53:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:06.114 02:53:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:06.114 02:53:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76808 00:07:06.114 killing process with pid 76808 00:07:06.114 02:53:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:06.114 02:53:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:06.114 02:53:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76808' 00:07:06.114 02:53:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 76808 00:07:06.114 02:53:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 76808 00:07:06.373 02:53:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 76824 00:07:06.373 02:53:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 76824 ']' 00:07:06.373 02:53:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 76824 00:07:06.373 02:53:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:06.373 02:53:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:06.373 02:53:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76824 00:07:06.631 killing process with pid 76824 00:07:06.631 02:53:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:06.631 02:53:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:06.631 02:53:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76824' 00:07:06.631 02:53:52 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@965 -- # kill 76824 00:07:06.631 02:53:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 76824 00:07:06.890 ************************************ 00:07:06.890 END TEST locking_app_on_unlocked_coremask 00:07:06.890 ************************************ 00:07:06.890 00:07:06.890 real 0m3.716s 00:07:06.890 user 0m4.294s 00:07:06.890 sys 0m1.017s 00:07:06.890 02:53:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:06.890 02:53:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:06.890 02:53:52 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:06.890 02:53:52 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:06.890 02:53:52 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:06.890 02:53:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:06.890 ************************************ 00:07:06.890 START TEST locking_app_on_locked_coremask 00:07:06.890 ************************************ 00:07:06.890 02:53:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:07:06.890 02:53:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=76892 00:07:06.890 02:53:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:06.890 02:53:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 76892 /var/tmp/spdk.sock 00:07:06.890 02:53:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 76892 ']' 00:07:06.890 02:53:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.890 02:53:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:06.890 02:53:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.890 02:53:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:06.890 02:53:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:06.890 [2024-05-14 02:53:52.846942] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:07:06.890 [2024-05-14 02:53:52.847112] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76892 ] 00:07:07.150 [2024-05-14 02:53:52.994661] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:07.150 [2024-05-14 02:53:53.010475] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.150 [2024-05-14 02:53:53.042736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.717 02:53:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:07.717 02:53:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:07.717 02:53:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=76903 00:07:07.717 02:53:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 76903 /var/tmp/spdk2.sock 00:07:07.717 02:53:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:07.717 02:53:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:07.717 02:53:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 76903 /var/tmp/spdk2.sock 00:07:07.717 02:53:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:07.717 02:53:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:07.717 02:53:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:07.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:07.717 02:53:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:07.717 02:53:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 76903 /var/tmp/spdk2.sock 00:07:07.717 02:53:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 76903 ']' 00:07:07.717 02:53:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:07.717 02:53:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:07.717 02:53:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:07.717 02:53:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:07.717 02:53:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:07.974 [2024-05-14 02:53:53.815220] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:07:07.974 [2024-05-14 02:53:53.815390] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76903 ] 00:07:07.974 [2024-05-14 02:53:53.966540] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:07.974 [2024-05-14 02:53:53.986173] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 76892 has claimed it. 
00:07:07.974 [2024-05-14 02:53:53.986240] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:08.541 ERROR: process (pid: 76903) is no longer running 00:07:08.541 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (76903) - No such process 00:07:08.541 02:53:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:08.541 02:53:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:07:08.541 02:53:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:08.541 02:53:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:08.541 02:53:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:08.541 02:53:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:08.541 02:53:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 76892 00:07:08.541 02:53:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 76892 00:07:08.541 02:53:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:09.138 02:53:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 76892 00:07:09.138 02:53:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 76892 ']' 00:07:09.138 02:53:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 76892 00:07:09.138 02:53:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:09.138 02:53:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:09.138 02:53:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76892 00:07:09.138 killing process with pid 76892 00:07:09.138 02:53:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:09.138 02:53:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:09.138 02:53:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76892' 00:07:09.138 02:53:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 76892 00:07:09.138 02:53:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 76892 00:07:09.138 ************************************ 00:07:09.138 END TEST locking_app_on_locked_coremask 00:07:09.138 ************************************ 00:07:09.138 00:07:09.138 real 0m2.403s 00:07:09.138 user 0m2.801s 00:07:09.138 sys 0m0.629s 00:07:09.138 02:53:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:09.138 02:53:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:09.397 02:53:55 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:09.397 02:53:55 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:09.397 02:53:55 event.cpu_locks 
-- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:09.397 02:53:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:09.397 ************************************ 00:07:09.397 START TEST locking_overlapped_coremask 00:07:09.397 ************************************ 00:07:09.397 02:53:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:07:09.397 02:53:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=76951 00:07:09.397 02:53:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 76951 /var/tmp/spdk.sock 00:07:09.397 02:53:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:09.397 02:53:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 76951 ']' 00:07:09.397 02:53:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.397 02:53:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:09.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.397 02:53:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.397 02:53:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:09.397 02:53:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:09.397 [2024-05-14 02:53:55.304556] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:07:09.397 [2024-05-14 02:53:55.305419] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76951 ] 00:07:09.655 [2024-05-14 02:53:55.460469] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:09.655 [2024-05-14 02:53:55.480091] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:09.655 [2024-05-14 02:53:55.513637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.655 [2024-05-14 02:53:55.513706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.655 [2024-05-14 02:53:55.513773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:10.223 02:53:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:10.223 02:53:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:10.223 02:53:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=76969 00:07:10.223 02:53:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:10.223 02:53:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 76969 /var/tmp/spdk2.sock 00:07:10.223 02:53:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:10.223 02:53:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 76969 /var/tmp/spdk2.sock 00:07:10.223 02:53:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:10.223 02:53:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:10.223 02:53:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:10.223 02:53:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:10.223 02:53:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 76969 /var/tmp/spdk2.sock 00:07:10.223 02:53:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 76969 ']' 00:07:10.223 02:53:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:10.223 02:53:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:10.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:10.223 02:53:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:10.223 02:53:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:10.223 02:53:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:10.482 [2024-05-14 02:53:56.355185] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:07:10.482 [2024-05-14 02:53:56.355863] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76969 ] 00:07:10.482 [2024-05-14 02:53:56.505500] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:10.740 [2024-05-14 02:53:56.534165] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 76951 has claimed it. 00:07:10.740 [2024-05-14 02:53:56.534256] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:10.998 ERROR: process (pid: 76969) is no longer running 00:07:10.998 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (76969) - No such process 00:07:10.998 02:53:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:10.998 02:53:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:07:10.998 02:53:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:10.998 02:53:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:10.998 02:53:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:10.998 02:53:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:10.998 02:53:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:10.998 02:53:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:10.998 02:53:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:10.998 02:53:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:10.998 02:53:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 76951 00:07:10.998 02:53:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 76951 ']' 00:07:10.998 02:53:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 76951 00:07:10.998 02:53:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:07:10.998 02:53:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:10.998 02:53:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76951 00:07:10.998 02:53:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:10.998 02:53:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:10.998 killing process with pid 76951 00:07:10.998 02:53:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76951' 00:07:10.998 02:53:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 76951 00:07:10.998 02:53:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # wait 76951 00:07:11.566 00:07:11.566 real 0m2.110s 00:07:11.566 user 0m5.832s 00:07:11.566 sys 0m0.475s 00:07:11.566 02:53:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:11.566 02:53:57 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:11.566 ************************************ 00:07:11.566 END TEST locking_overlapped_coremask 00:07:11.566 ************************************ 00:07:11.566 02:53:57 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:11.566 02:53:57 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:11.566 02:53:57 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:11.566 02:53:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:11.566 ************************************ 00:07:11.566 START TEST locking_overlapped_coremask_via_rpc 00:07:11.566 ************************************ 00:07:11.566 02:53:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:07:11.566 02:53:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=77011 00:07:11.566 02:53:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 77011 /var/tmp/spdk.sock 00:07:11.566 02:53:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:11.566 02:53:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 77011 ']' 00:07:11.566 02:53:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.566 02:53:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:11.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.566 02:53:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.566 02:53:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:11.566 02:53:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.566 [2024-05-14 02:53:57.459130] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:07:11.566 [2024-05-14 02:53:57.459311] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77011 ] 00:07:11.825 [2024-05-14 02:53:57.594623] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:11.825 [2024-05-14 02:53:57.612933] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:11.825 [2024-05-14 02:53:57.613037] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:11.825 [2024-05-14 02:53:57.649280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.825 [2024-05-14 02:53:57.649359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.825 [2024-05-14 02:53:57.649442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:12.391 02:53:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:12.391 02:53:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:12.391 02:53:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:12.391 02:53:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=77029 00:07:12.391 02:53:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 77029 /var/tmp/spdk2.sock 00:07:12.391 02:53:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 77029 ']' 00:07:12.391 02:53:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:12.391 02:53:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:12.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:12.391 02:53:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:12.391 02:53:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:12.391 02:53:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.391 [2024-05-14 02:53:58.408554] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:07:12.391 [2024-05-14 02:53:58.409133] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77029 ] 00:07:12.650 [2024-05-14 02:53:58.549504] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:12.650 [2024-05-14 02:53:58.579272] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:12.650 [2024-05-14 02:53:58.579328] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:12.650 [2024-05-14 02:53:58.648244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:12.650 [2024-05-14 02:53:58.651287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:12.650 [2024-05-14 02:53:58.651365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:07:13.585 02:53:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:13.585 02:53:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:13.585 02:53:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:13.585 02:53:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.585 02:53:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.585 02:53:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.585 02:53:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:13.585 02:53:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:13.585 02:53:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:13.585 02:53:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:07:13.585 02:53:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:13.585 02:53:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:07:13.585 02:53:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:13.585 02:53:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:13.585 02:53:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.585 02:53:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.585 [2024-05-14 02:53:59.350325] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 77011 has claimed it. 00:07:13.585 request: 00:07:13.585 { 00:07:13.585 "method": "framework_enable_cpumask_locks", 00:07:13.585 "req_id": 1 00:07:13.585 } 00:07:13.585 Got JSON-RPC error response 00:07:13.585 response: 00:07:13.585 { 00:07:13.585 "code": -32603, 00:07:13.585 "message": "Failed to claim CPU core: 2" 00:07:13.585 } 00:07:13.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
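Note on the exchange above: both targets in this test were launched with --disable-cpumask-locks, so neither took its CPU core locks at startup. The first instance (pid 77011, core mask 0x7) then claimed its cores through the framework_enable_cpumask_locks RPC, and the second instance (pid 77029, core mask 0x1c) received the "Failed to claim CPU core: 2" JSON-RPC error because core 2 belongs to both masks. A minimal sketch of that flow, assuming the same spdk_tgt binary and the harness's rpc_cmd wrapper seen in this log (any JSON-RPC client issuing framework_enable_cpumask_locks against the same sockets would exercise the same path):

  # start two targets whose core masks overlap on core 2, with startup locking disabled
  ./build/bin/spdk_tgt -m 0x7  --disable-cpumask-locks &
  ./build/bin/spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &

  # first target claims cores 0-2, creating /var/tmp/spdk_cpu_lock_000 .. _002
  rpc_cmd framework_enable_cpumask_locks

  # second target now fails to lock core 2 and returns the error shown above
  rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks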
00:07:13.585 02:53:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:13.585 02:53:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:13.585 02:53:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:13.585 02:53:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:13.585 02:53:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:13.585 02:53:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 77011 /var/tmp/spdk.sock 00:07:13.585 02:53:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 77011 ']' 00:07:13.585 02:53:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.585 02:53:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:13.585 02:53:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.585 02:53:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:13.585 02:53:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.843 02:53:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:13.843 02:53:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:13.843 02:53:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 77029 /var/tmp/spdk2.sock 00:07:13.843 02:53:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 77029 ']' 00:07:13.843 02:53:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:13.843 02:53:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:13.843 02:53:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:13.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:13.843 02:53:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:13.843 02:53:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.843 02:53:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:13.843 02:53:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:13.843 02:53:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:13.843 02:53:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:13.843 02:53:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:13.843 02:53:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:13.843 00:07:13.843 real 0m2.483s 00:07:13.843 user 0m1.241s 00:07:13.843 sys 0m0.168s 00:07:13.843 02:53:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:13.843 02:53:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.843 ************************************ 00:07:13.843 END TEST locking_overlapped_coremask_via_rpc 00:07:13.843 ************************************ 00:07:14.102 02:53:59 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:14.102 02:53:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 77011 ]] 00:07:14.102 02:53:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 77011 00:07:14.102 02:53:59 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 77011 ']' 00:07:14.102 02:53:59 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 77011 00:07:14.102 02:53:59 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:07:14.102 02:53:59 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:14.102 02:53:59 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77011 00:07:14.102 killing process with pid 77011 00:07:14.102 02:53:59 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:14.102 02:53:59 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:14.102 02:53:59 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77011' 00:07:14.102 02:53:59 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 77011 00:07:14.102 02:53:59 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 77011 00:07:14.361 02:54:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 77029 ]] 00:07:14.361 02:54:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 77029 00:07:14.361 02:54:00 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 77029 ']' 00:07:14.361 02:54:00 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 77029 00:07:14.361 02:54:00 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:07:14.361 02:54:00 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:14.361 
02:54:00 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77029 00:07:14.361 killing process with pid 77029 00:07:14.361 02:54:00 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:07:14.361 02:54:00 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:07:14.361 02:54:00 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77029' 00:07:14.361 02:54:00 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 77029 00:07:14.361 02:54:00 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 77029 00:07:14.620 02:54:00 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:14.620 Process with pid 77011 is not found 00:07:14.620 Process with pid 77029 is not found 00:07:14.620 02:54:00 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:14.620 02:54:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 77011 ]] 00:07:14.620 02:54:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 77011 00:07:14.620 02:54:00 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 77011 ']' 00:07:14.620 02:54:00 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 77011 00:07:14.620 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (77011) - No such process 00:07:14.620 02:54:00 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 77011 is not found' 00:07:14.620 02:54:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 77029 ]] 00:07:14.620 02:54:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 77029 00:07:14.620 02:54:00 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 77029 ']' 00:07:14.620 02:54:00 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 77029 00:07:14.620 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (77029) - No such process 00:07:14.620 02:54:00 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 77029 is not found' 00:07:14.620 02:54:00 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:14.620 ************************************ 00:07:14.620 END TEST cpu_locks 00:07:14.620 ************************************ 00:07:14.620 00:07:14.620 real 0m18.721s 00:07:14.620 user 0m33.184s 00:07:14.620 sys 0m4.945s 00:07:14.620 02:54:00 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:14.620 02:54:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:14.620 00:07:14.620 real 0m45.940s 00:07:14.620 user 1m29.587s 00:07:14.620 sys 0m8.449s 00:07:14.620 02:54:00 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:14.620 02:54:00 event -- common/autotest_common.sh@10 -- # set +x 00:07:14.620 ************************************ 00:07:14.620 END TEST event 00:07:14.620 ************************************ 00:07:14.620 02:54:00 -- spdk/autotest.sh@178 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:14.620 02:54:00 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:14.620 02:54:00 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:14.620 02:54:00 -- common/autotest_common.sh@10 -- # set +x 00:07:14.620 ************************************ 00:07:14.620 START TEST thread 00:07:14.621 ************************************ 00:07:14.621 02:54:00 thread -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:14.880 * Looking for test storage... 
00:07:14.880 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:14.880 02:54:00 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:14.880 02:54:00 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:07:14.880 02:54:00 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:14.880 02:54:00 thread -- common/autotest_common.sh@10 -- # set +x 00:07:14.880 ************************************ 00:07:14.880 START TEST thread_poller_perf 00:07:14.880 ************************************ 00:07:14.880 02:54:00 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:14.880 [2024-05-14 02:54:00.740876] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:07:14.880 [2024-05-14 02:54:00.741069] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77152 ] 00:07:14.880 [2024-05-14 02:54:00.880996] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:14.880 [2024-05-14 02:54:00.904143] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.139 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:15.139 [2024-05-14 02:54:00.939760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.074 ====================================== 00:07:16.074 busy:2210774908 (cyc) 00:07:16.074 total_run_count: 323000 00:07:16.074 tsc_hz: 2200000000 (cyc) 00:07:16.074 ====================================== 00:07:16.074 poller_cost: 6844 (cyc), 3110 (nsec) 00:07:16.074 00:07:16.074 real 0m1.305s 00:07:16.074 user 0m1.132s 00:07:16.074 sys 0m0.067s 00:07:16.074 02:54:02 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:16.074 ************************************ 00:07:16.074 02:54:02 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:16.074 END TEST thread_poller_perf 00:07:16.074 ************************************ 00:07:16.074 02:54:02 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:16.074 02:54:02 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:07:16.074 02:54:02 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:16.074 02:54:02 thread -- common/autotest_common.sh@10 -- # set +x 00:07:16.074 ************************************ 00:07:16.074 START TEST thread_poller_perf 00:07:16.074 ************************************ 00:07:16.074 02:54:02 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:16.074 [2024-05-14 02:54:02.094177] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 
00:07:16.074 [2024-05-14 02:54:02.094397] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77190 ] 00:07:16.333 [2024-05-14 02:54:02.229934] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:16.333 [2024-05-14 02:54:02.245833] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.333 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:16.333 [2024-05-14 02:54:02.280412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.708 ====================================== 00:07:17.708 busy:2203497840 (cyc) 00:07:17.708 total_run_count: 4293000 00:07:17.708 tsc_hz: 2200000000 (cyc) 00:07:17.708 ====================================== 00:07:17.708 poller_cost: 513 (cyc), 233 (nsec) 00:07:17.708 00:07:17.708 real 0m1.290s 00:07:17.708 user 0m1.107s 00:07:17.708 sys 0m0.076s 00:07:17.708 02:54:03 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:17.708 02:54:03 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:17.708 ************************************ 00:07:17.708 END TEST thread_poller_perf 00:07:17.708 ************************************ 00:07:17.708 02:54:03 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:17.708 00:07:17.708 real 0m2.775s 00:07:17.708 user 0m2.308s 00:07:17.708 sys 0m0.243s 00:07:17.708 02:54:03 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:17.708 02:54:03 thread -- common/autotest_common.sh@10 -- # set +x 00:07:17.708 ************************************ 00:07:17.708 END TEST thread 00:07:17.708 ************************************ 00:07:17.708 02:54:03 -- spdk/autotest.sh@179 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:17.708 02:54:03 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:17.708 02:54:03 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:17.708 02:54:03 -- common/autotest_common.sh@10 -- # set +x 00:07:17.708 ************************************ 00:07:17.708 START TEST accel 00:07:17.708 ************************************ 00:07:17.708 02:54:03 accel -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:17.708 * Looking for test storage... 00:07:17.708 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:17.708 02:54:03 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:17.708 02:54:03 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:17.708 02:54:03 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:17.708 02:54:03 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=77260 00:07:17.708 02:54:03 accel -- accel/accel.sh@63 -- # waitforlisten 77260 00:07:17.708 02:54:03 accel -- common/autotest_common.sh@827 -- # '[' -z 77260 ']' 00:07:17.708 02:54:03 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.708 02:54:03 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:17.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.708 02:54:03 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
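The poller_cost figures printed by the two poller_perf runs above appear to follow directly from the reported counters: busy cycles divided by total_run_count gives cycles per poller invocation, and scaling by the reported tsc_hz converts that to nanoseconds. A quick check of the logged values with shell integer arithmetic (numbers copied from the two result blocks; truncation matches the tool's output):

  echo $(( 2210774908 / 323000 ))             # first run:  ~6844 cycles per poll
  echo $(( 6844 * 1000000000 / 2200000000 ))  # ~3110 ns at the reported 2.2 GHz TSC
  echo $(( 2203497840 / 4293000 ))            # second run: ~513 cycles per poll
  echo $(( 513 * 1000000000 / 2200000000 ))   # ~233 ns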
00:07:17.708 02:54:03 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:17.708 02:54:03 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:17.708 02:54:03 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:17.708 02:54:03 accel -- common/autotest_common.sh@10 -- # set +x 00:07:17.708 02:54:03 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:17.708 02:54:03 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:17.708 02:54:03 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.708 02:54:03 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.708 02:54:03 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:17.708 02:54:03 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:17.708 02:54:03 accel -- accel/accel.sh@41 -- # jq -r . 00:07:17.708 [2024-05-14 02:54:03.636050] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:07:17.709 [2024-05-14 02:54:03.636286] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77260 ] 00:07:17.966 [2024-05-14 02:54:03.784035] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:17.966 [2024-05-14 02:54:03.806978] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.966 [2024-05-14 02:54:03.843720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.533 02:54:04 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:18.533 02:54:04 accel -- common/autotest_common.sh@860 -- # return 0 00:07:18.533 02:54:04 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:18.533 02:54:04 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:18.533 02:54:04 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:18.533 02:54:04 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:18.533 02:54:04 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:18.533 02:54:04 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:18.533 02:54:04 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:18.533 02:54:04 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.533 02:54:04 accel -- common/autotest_common.sh@10 -- # set +x 00:07:18.792 02:54:04 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.792 02:54:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:18.792 02:54:04 accel -- accel/accel.sh@72 -- # IFS== 00:07:18.792 02:54:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:18.792 02:54:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:18.792 02:54:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:18.792 02:54:04 accel -- accel/accel.sh@72 -- # IFS== 00:07:18.792 02:54:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:18.792 02:54:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:18.792 02:54:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:18.792 02:54:04 accel -- accel/accel.sh@72 -- # IFS== 00:07:18.792 02:54:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:18.792 02:54:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:18.792 02:54:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:18.792 02:54:04 accel -- accel/accel.sh@72 -- # IFS== 00:07:18.792 02:54:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:18.792 02:54:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:18.792 02:54:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:18.792 02:54:04 accel -- accel/accel.sh@72 -- # IFS== 00:07:18.792 02:54:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:18.792 02:54:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:18.792 02:54:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:18.792 02:54:04 accel -- accel/accel.sh@72 -- # IFS== 00:07:18.792 02:54:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:18.792 02:54:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:18.792 02:54:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:18.792 02:54:04 accel -- accel/accel.sh@72 -- # IFS== 00:07:18.792 02:54:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:18.792 02:54:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:18.792 02:54:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:18.792 02:54:04 accel -- accel/accel.sh@72 -- # IFS== 00:07:18.792 02:54:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:18.792 02:54:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:18.792 02:54:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:18.792 02:54:04 accel -- accel/accel.sh@72 -- # IFS== 00:07:18.792 02:54:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:18.792 02:54:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:18.792 02:54:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:18.792 02:54:04 accel -- accel/accel.sh@72 -- # IFS== 00:07:18.792 02:54:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:18.792 02:54:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:18.792 02:54:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:18.792 02:54:04 accel -- accel/accel.sh@72 -- # IFS== 00:07:18.792 02:54:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:18.792 
02:54:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:18.792 02:54:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:18.792 02:54:04 accel -- accel/accel.sh@72 -- # IFS== 00:07:18.792 02:54:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:18.792 02:54:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:18.792 02:54:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:18.792 02:54:04 accel -- accel/accel.sh@72 -- # IFS== 00:07:18.792 02:54:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:18.792 02:54:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:18.792 02:54:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:18.792 02:54:04 accel -- accel/accel.sh@72 -- # IFS== 00:07:18.792 02:54:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:18.792 02:54:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:18.792 02:54:04 accel -- accel/accel.sh@75 -- # killprocess 77260 00:07:18.792 02:54:04 accel -- common/autotest_common.sh@946 -- # '[' -z 77260 ']' 00:07:18.792 02:54:04 accel -- common/autotest_common.sh@950 -- # kill -0 77260 00:07:18.792 02:54:04 accel -- common/autotest_common.sh@951 -- # uname 00:07:18.792 02:54:04 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:18.792 02:54:04 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77260 00:07:18.792 killing process with pid 77260 00:07:18.792 02:54:04 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:18.792 02:54:04 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:18.792 02:54:04 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77260' 00:07:18.792 02:54:04 accel -- common/autotest_common.sh@965 -- # kill 77260 00:07:18.792 02:54:04 accel -- common/autotest_common.sh@970 -- # wait 77260 00:07:19.050 02:54:04 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:19.050 02:54:04 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:19.050 02:54:04 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:19.050 02:54:04 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:19.050 02:54:04 accel -- common/autotest_common.sh@10 -- # set +x 00:07:19.050 02:54:04 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:07:19.050 02:54:04 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:19.050 02:54:04 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:19.050 02:54:04 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:19.050 02:54:04 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:19.050 02:54:04 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.050 02:54:04 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.050 02:54:04 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:19.050 02:54:04 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:19.050 02:54:04 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:07:19.050 02:54:05 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:19.050 02:54:05 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:19.050 02:54:05 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:19.050 02:54:05 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:19.050 02:54:05 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:19.050 02:54:05 accel -- common/autotest_common.sh@10 -- # set +x 00:07:19.050 ************************************ 00:07:19.050 START TEST accel_missing_filename 00:07:19.050 ************************************ 00:07:19.050 02:54:05 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:07:19.050 02:54:05 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:07:19.050 02:54:05 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:19.050 02:54:05 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:19.050 02:54:05 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.050 02:54:05 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:19.050 02:54:05 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.050 02:54:05 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:07:19.050 02:54:05 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:19.050 02:54:05 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:19.050 02:54:05 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:19.050 02:54:05 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:19.050 02:54:05 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.050 02:54:05 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.050 02:54:05 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:19.050 02:54:05 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:19.050 02:54:05 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:19.309 [2024-05-14 02:54:05.113797] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:07:19.309 [2024-05-14 02:54:05.114008] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77319 ] 00:07:19.309 [2024-05-14 02:54:05.262832] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:19.309 [2024-05-14 02:54:05.282893] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.309 [2024-05-14 02:54:05.315759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.569 [2024-05-14 02:54:05.347535] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:19.569 [2024-05-14 02:54:05.396413] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:07:19.569 A filename is required. 
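(The "A filename is required." failure above is the expected outcome: the compress workload will not start without an input file. A sketch of an invocation that satisfies that check, reusing the -l flag and input file that the accel_compress_verify run below passes:
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib)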
00:07:19.569 02:54:05 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:07:19.569 02:54:05 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:19.569 02:54:05 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:07:19.569 02:54:05 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:07:19.569 02:54:05 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:07:19.569 02:54:05 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:19.569 00:07:19.569 real 0m0.420s 00:07:19.569 user 0m0.238s 00:07:19.569 sys 0m0.128s 00:07:19.569 02:54:05 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:19.569 ************************************ 00:07:19.569 END TEST accel_missing_filename 00:07:19.569 ************************************ 00:07:19.569 02:54:05 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:19.569 02:54:05 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:19.569 02:54:05 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:07:19.569 02:54:05 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:19.569 02:54:05 accel -- common/autotest_common.sh@10 -- # set +x 00:07:19.569 ************************************ 00:07:19.569 START TEST accel_compress_verify 00:07:19.569 ************************************ 00:07:19.569 02:54:05 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:19.569 02:54:05 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:07:19.569 02:54:05 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:19.569 02:54:05 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:19.569 02:54:05 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.569 02:54:05 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:19.569 02:54:05 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.569 02:54:05 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:19.569 02:54:05 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:19.569 02:54:05 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:19.569 02:54:05 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:19.569 02:54:05 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:19.569 02:54:05 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.569 02:54:05 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.569 02:54:05 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:19.569 02:54:05 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:19.569 02:54:05 accel.accel_compress_verify -- 
accel/accel.sh@41 -- # jq -r . 00:07:19.569 [2024-05-14 02:54:05.592496] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:07:19.569 [2024-05-14 02:54:05.592751] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77339 ] 00:07:19.828 [2024-05-14 02:54:05.742734] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:19.828 [2024-05-14 02:54:05.761918] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.828 [2024-05-14 02:54:05.798683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.828 [2024-05-14 02:54:05.831996] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:20.087 [2024-05-14 02:54:05.882997] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:07:20.087 00:07:20.087 Compression does not support the verify option, aborting. 00:07:20.087 02:54:05 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:07:20.087 02:54:05 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:20.087 02:54:05 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:07:20.087 02:54:05 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:07:20.087 02:54:05 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:07:20.087 02:54:05 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:20.087 00:07:20.087 real 0m0.421s 00:07:20.087 user 0m0.234s 00:07:20.087 sys 0m0.136s 00:07:20.087 ************************************ 00:07:20.087 END TEST accel_compress_verify 00:07:20.087 ************************************ 00:07:20.087 02:54:05 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:20.087 02:54:05 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:20.087 02:54:06 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:20.087 02:54:06 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:20.087 02:54:06 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:20.087 02:54:06 accel -- common/autotest_common.sh@10 -- # set +x 00:07:20.087 ************************************ 00:07:20.087 START TEST accel_wrong_workload 00:07:20.087 ************************************ 00:07:20.087 02:54:06 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:07:20.087 02:54:06 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:07:20.087 02:54:06 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:20.087 02:54:06 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:20.087 02:54:06 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.087 02:54:06 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:20.087 02:54:06 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.087 02:54:06 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w 
foobar 00:07:20.087 02:54:06 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:20.087 02:54:06 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:20.087 02:54:06 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:20.087 02:54:06 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:20.087 02:54:06 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.087 02:54:06 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.087 02:54:06 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:20.087 02:54:06 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:20.087 02:54:06 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:07:20.087 Unsupported workload type: foobar 00:07:20.087 [2024-05-14 02:54:06.054743] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:20.087 accel_perf options: 00:07:20.087 [-h help message] 00:07:20.087 [-q queue depth per core] 00:07:20.087 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:20.087 [-T number of threads per core 00:07:20.087 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:20.087 [-t time in seconds] 00:07:20.087 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:20.087 [ dif_verify, , dif_generate, dif_generate_copy 00:07:20.087 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:20.087 [-l for compress/decompress workloads, name of uncompressed input file 00:07:20.087 [-S for crc32c workload, use this seed value (default 0) 00:07:20.087 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:20.087 [-f for fill workload, use this BYTE value (default 255) 00:07:20.087 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:20.087 [-y verify result if this switch is on] 00:07:20.087 [-a tasks to allocate per core (default: same value as -q)] 00:07:20.087 Can be used to spread operations across a wider range of memory. 
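(The listing above is the full set of accel_perf options. As a sketch of an invocation that clears the workload check, using one of the listed workload names and the same flags the accel_crc32c test further down exercises:
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y)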
00:07:20.087 02:54:06 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:07:20.087 02:54:06 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:20.087 02:54:06 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:20.087 02:54:06 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:20.087 00:07:20.087 real 0m0.073s 00:07:20.087 user 0m0.081s 00:07:20.087 sys 0m0.036s 00:07:20.087 02:54:06 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:20.087 02:54:06 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:20.087 ************************************ 00:07:20.087 END TEST accel_wrong_workload 00:07:20.087 ************************************ 00:07:20.346 02:54:06 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:20.346 02:54:06 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:07:20.346 02:54:06 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:20.346 02:54:06 accel -- common/autotest_common.sh@10 -- # set +x 00:07:20.346 ************************************ 00:07:20.346 START TEST accel_negative_buffers 00:07:20.346 ************************************ 00:07:20.346 02:54:06 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:20.346 02:54:06 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:07:20.346 02:54:06 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:20.346 02:54:06 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:20.346 02:54:06 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.346 02:54:06 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:20.346 02:54:06 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.346 02:54:06 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:07:20.346 02:54:06 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:20.346 02:54:06 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:20.346 02:54:06 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:20.346 02:54:06 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:20.346 02:54:06 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.346 02:54:06 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.346 02:54:06 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:20.346 02:54:06 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:20.346 02:54:06 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:20.346 -x option must be non-negative. 
00:07:20.346 [2024-05-14 02:54:06.188652] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:20.346 accel_perf options: 00:07:20.346 [-h help message] 00:07:20.346 [-q queue depth per core] 00:07:20.346 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:20.346 [-T number of threads per core 00:07:20.346 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:20.346 [-t time in seconds] 00:07:20.346 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:20.346 [ dif_verify, , dif_generate, dif_generate_copy 00:07:20.346 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:20.346 [-l for compress/decompress workloads, name of uncompressed input file 00:07:20.346 [-S for crc32c workload, use this seed value (default 0) 00:07:20.346 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:20.346 [-f for fill workload, use this BYTE value (default 255) 00:07:20.346 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:20.346 [-y verify result if this switch is on] 00:07:20.346 [-a tasks to allocate per core (default: same value as -q)] 00:07:20.346 Can be used to spread operations across a wider range of memory. 00:07:20.346 02:54:06 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:07:20.346 02:54:06 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:20.346 02:54:06 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:20.346 02:54:06 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:20.346 00:07:20.346 real 0m0.075s 00:07:20.346 user 0m0.085s 00:07:20.346 sys 0m0.037s 00:07:20.346 02:54:06 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:20.346 02:54:06 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:20.346 ************************************ 00:07:20.346 END TEST accel_negative_buffers 00:07:20.346 ************************************ 00:07:20.346 02:54:06 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:20.346 02:54:06 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:20.346 02:54:06 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:20.346 02:54:06 accel -- common/autotest_common.sh@10 -- # set +x 00:07:20.346 ************************************ 00:07:20.346 START TEST accel_crc32c 00:07:20.346 ************************************ 00:07:20.346 02:54:06 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:20.346 02:54:06 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:20.346 02:54:06 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:20.346 02:54:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.346 02:54:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.347 02:54:06 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:20.347 02:54:06 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:20.347 02:54:06 accel.accel_crc32c -- accel/accel.sh@12 -- # 
build_accel_config 00:07:20.347 02:54:06 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:20.347 02:54:06 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:20.347 02:54:06 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.347 02:54:06 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.347 02:54:06 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:20.347 02:54:06 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:20.347 02:54:06 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:20.347 [2024-05-14 02:54:06.321390] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:07:20.347 [2024-05-14 02:54:06.321559] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77406 ] 00:07:20.605 [2024-05-14 02:54:06.467950] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:20.605 [2024-05-14 02:54:06.486233] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.605 [2024-05-14 02:54:06.520979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.605 02:54:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:20.605 02:54:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.605 02:54:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.605 02:54:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.605 02:54:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:20.605 02:54:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.605 02:54:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.605 02:54:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.605 02:54:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:20.605 02:54:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.605 02:54:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.605 02:54:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.605 02:54:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:20.605 02:54:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.605 02:54:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.605 02:54:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.605 02:54:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:20.605 02:54:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.605 02:54:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.605 02:54:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.605 02:54:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:20.605 02:54:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.605 02:54:06 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:20.605 02:54:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.605 02:54:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.605 02:54:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:20.605 02:54:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.605 02:54:06 accel.accel_crc32c -- 
accel/accel.sh@19 -- # IFS=: 00:07:20.605 02:54:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.605 02:54:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:20.605 02:54:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.605 02:54:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.605 02:54:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.605 02:54:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:20.605 02:54:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.606 02:54:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.606 02:54:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.606 02:54:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:20.606 02:54:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.606 02:54:06 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:20.606 02:54:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.606 02:54:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.606 02:54:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:20.606 02:54:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.606 02:54:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.606 02:54:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.606 02:54:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:20.606 02:54:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.606 02:54:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.606 02:54:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.606 02:54:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:20.606 02:54:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.606 02:54:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.606 02:54:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.606 02:54:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:20.606 02:54:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.606 02:54:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.606 02:54:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.606 02:54:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:20.606 02:54:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.606 02:54:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.606 02:54:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.606 02:54:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:20.606 02:54:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.606 02:54:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.606 02:54:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.606 02:54:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:20.606 02:54:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.606 02:54:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.606 02:54:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.038 02:54:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:22.038 02:54:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.038 02:54:07 accel.accel_crc32c -- 
accel/accel.sh@19 -- # IFS=: 00:07:22.038 02:54:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.038 02:54:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:22.038 02:54:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.038 02:54:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.038 02:54:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.038 02:54:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:22.038 02:54:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.038 02:54:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.038 02:54:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.038 02:54:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:22.038 02:54:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.038 02:54:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.038 02:54:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.038 02:54:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:22.038 02:54:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.038 02:54:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.038 02:54:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.038 02:54:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:22.038 02:54:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.038 02:54:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.038 02:54:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.038 02:54:07 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:22.038 02:54:07 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:22.038 02:54:07 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:22.038 00:07:22.038 real 0m1.412s 00:07:22.038 user 0m1.185s 00:07:22.038 sys 0m0.137s 00:07:22.038 02:54:07 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:22.038 ************************************ 00:07:22.038 END TEST accel_crc32c 00:07:22.038 ************************************ 00:07:22.038 02:54:07 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:22.038 02:54:07 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:22.038 02:54:07 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:22.038 02:54:07 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:22.038 02:54:07 accel -- common/autotest_common.sh@10 -- # set +x 00:07:22.038 ************************************ 00:07:22.038 START TEST accel_crc32c_C2 00:07:22.038 ************************************ 00:07:22.038 02:54:07 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:22.038 02:54:07 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:22.038 02:54:07 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:22.038 02:54:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.038 02:54:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.038 02:54:07 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:22.038 02:54:07 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 
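(On the command above, -C 2 maps to the io vector size option from the accel_perf help shown earlier in this log (default 1); the same workload at the default vector size is simply:
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -y)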
00:07:22.038 02:54:07 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:22.038 02:54:07 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:22.038 02:54:07 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:22.038 02:54:07 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.038 02:54:07 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.038 02:54:07 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:22.038 02:54:07 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:22.038 02:54:07 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:22.038 [2024-05-14 02:54:07.789458] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:07:22.038 [2024-05-14 02:54:07.789675] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77436 ] 00:07:22.038 [2024-05-14 02:54:07.935922] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:22.038 [2024-05-14 02:54:07.956081] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.038 [2024-05-14 02:54:07.992984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.297 02:54:08 accel.accel_crc32c_C2 
-- accel/accel.sh@20 -- # val=0 00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 
00:07:22.297 02:54:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.233 02:54:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:23.233 02:54:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.233 02:54:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.233 02:54:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.233 02:54:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:23.233 02:54:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.233 02:54:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.234 02:54:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.234 02:54:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:23.234 02:54:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.234 02:54:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.234 02:54:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.234 02:54:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:23.234 02:54:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.234 02:54:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.234 02:54:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.234 02:54:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:23.234 02:54:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.234 02:54:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.234 02:54:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.234 02:54:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:23.234 02:54:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.234 02:54:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.234 02:54:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.234 02:54:09 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:23.234 02:54:09 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:23.234 02:54:09 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:23.234 00:07:23.234 real 0m1.414s 00:07:23.234 user 0m1.196s 00:07:23.234 sys 0m0.126s 00:07:23.234 02:54:09 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:23.234 02:54:09 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:23.234 ************************************ 00:07:23.234 END TEST accel_crc32c_C2 00:07:23.234 ************************************ 00:07:23.234 02:54:09 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:23.234 02:54:09 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:23.234 02:54:09 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:23.234 02:54:09 accel -- common/autotest_common.sh@10 -- # set +x 00:07:23.234 ************************************ 00:07:23.234 START TEST accel_copy 00:07:23.234 ************************************ 00:07:23.234 02:54:09 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:07:23.234 02:54:09 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:23.234 02:54:09 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:07:23.234 02:54:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.234 02:54:09 
accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:23.234 02:54:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.234 02:54:09 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:23.234 02:54:09 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:23.234 02:54:09 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:23.234 02:54:09 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:23.234 02:54:09 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.234 02:54:09 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.234 02:54:09 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:23.234 02:54:09 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:23.234 02:54:09 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:23.234 [2024-05-14 02:54:09.242935] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:07:23.234 [2024-05-14 02:54:09.243086] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77477 ] 00:07:23.493 [2024-05-14 02:54:09.378932] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:23.493 [2024-05-14 02:54:09.395561] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.493 [2024-05-14 02:54:09.430354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.493 02:54:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:23.493 02:54:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.493 02:54:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.493 02:54:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.493 02:54:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:23.493 02:54:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.493 02:54:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.493 02:54:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.493 02:54:09 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:23.493 02:54:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.493 02:54:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.493 02:54:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.493 02:54:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:23.493 02:54:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.493 02:54:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.493 02:54:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.493 02:54:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:23.493 02:54:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.493 02:54:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.493 02:54:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.493 02:54:09 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:23.493 02:54:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.493 02:54:09 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:23.493 02:54:09 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:07:23.493 02:54:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.493 02:54:09 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:23.493 02:54:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.493 02:54:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.493 02:54:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.493 02:54:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:23.493 02:54:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.493 02:54:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.493 02:54:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.493 02:54:09 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:23.493 02:54:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.493 02:54:09 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:23.493 02:54:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.493 02:54:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.493 02:54:09 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:23.493 02:54:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.493 02:54:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.493 02:54:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.493 02:54:09 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:23.493 02:54:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.493 02:54:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.493 02:54:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.493 02:54:09 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:23.493 02:54:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.493 02:54:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.493 02:54:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.493 02:54:09 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:23.493 02:54:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.493 02:54:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.493 02:54:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.493 02:54:09 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:23.493 02:54:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.493 02:54:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.493 02:54:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.493 02:54:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:23.493 02:54:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.493 02:54:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.493 02:54:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.493 02:54:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:23.493 02:54:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.493 02:54:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.493 02:54:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.200 02:54:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:25.200 02:54:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.200 02:54:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.200 02:54:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.200 
02:54:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:25.201 02:54:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.201 02:54:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.201 02:54:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.201 02:54:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:25.201 02:54:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.201 02:54:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.201 02:54:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.201 02:54:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:25.201 02:54:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.201 02:54:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.201 02:54:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.201 02:54:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:25.201 02:54:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.201 02:54:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.201 02:54:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.201 02:54:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:25.201 02:54:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.201 02:54:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.201 02:54:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.201 02:54:10 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:25.201 02:54:10 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:25.201 02:54:10 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:25.201 00:07:25.201 real 0m1.381s 00:07:25.201 user 0m1.171s 00:07:25.201 sys 0m0.119s 00:07:25.201 02:54:10 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:25.201 02:54:10 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:25.201 ************************************ 00:07:25.201 END TEST accel_copy 00:07:25.201 ************************************ 00:07:25.201 02:54:10 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:25.201 02:54:10 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:25.201 02:54:10 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:25.201 02:54:10 accel -- common/autotest_common.sh@10 -- # set +x 00:07:25.201 ************************************ 00:07:25.201 START TEST accel_fill 00:07:25.201 ************************************ 00:07:25.201 02:54:10 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 
00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:25.201 [2024-05-14 02:54:10.693104] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:07:25.201 [2024-05-14 02:54:10.693299] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77507 ] 00:07:25.201 [2024-05-14 02:54:10.839631] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:25.201 [2024-05-14 02:54:10.862782] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.201 [2024-05-14 02:54:10.906507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 
bytes' 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:25.201 02:54:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.202 02:54:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.202 02:54:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.202 02:54:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:25.202 02:54:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.202 02:54:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.202 02:54:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:26.137 02:54:12 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:26.137 02:54:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:26.137 02:54:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:26.137 02:54:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:26.137 02:54:12 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:26.137 02:54:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:26.137 02:54:12 
accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:26.137 02:54:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:26.137 02:54:12 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:26.137 02:54:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:26.137 02:54:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:26.137 02:54:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:26.137 02:54:12 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:26.137 02:54:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:26.137 02:54:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:26.137 02:54:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:26.137 02:54:12 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:26.137 02:54:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:26.137 02:54:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:26.137 02:54:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:26.137 02:54:12 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:26.137 02:54:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:26.137 02:54:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:26.137 02:54:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:26.137 02:54:12 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:26.137 02:54:12 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:26.137 02:54:12 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:26.137 00:07:26.137 real 0m1.441s 00:07:26.137 user 0m0.016s 00:07:26.137 sys 0m0.001s 00:07:26.137 02:54:12 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:26.137 ************************************ 00:07:26.137 END TEST accel_fill 00:07:26.137 ************************************ 00:07:26.137 02:54:12 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:26.137 02:54:12 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:26.137 02:54:12 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:26.137 02:54:12 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:26.137 02:54:12 accel -- common/autotest_common.sh@10 -- # set +x 00:07:26.137 ************************************ 00:07:26.137 START TEST accel_copy_crc32c 00:07:26.137 ************************************ 00:07:26.137 02:54:12 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:07:26.137 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:26.137 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:26.137 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.137 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.137 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:26.137 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:26.137 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:26.137 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:26.137 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:26.137 02:54:12 accel.accel_copy_crc32c -- 
accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.137 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.137 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:26.137 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:26.137 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:26.396 [2024-05-14 02:54:12.186412] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:07:26.396 [2024-05-14 02:54:12.186601] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77543 ] 00:07:26.396 [2024-05-14 02:54:12.333822] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:26.396 [2024-05-14 02:54:12.354108] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.396 [2024-05-14 02:54:12.391011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.654 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:26.654 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.654 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.654 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.654 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:26.654 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.654 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.654 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.654 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:26.654 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.654 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.654 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.654 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:26.654 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.654 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.654 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.654 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:26.654 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.654 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.654 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.654 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:26.654 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.654 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:26.654 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.654 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.654 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:26.654 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.654 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.654 02:54:12 
accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.655 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:26.655 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.655 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.655 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.655 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:26.655 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.655 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.655 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.655 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:26.655 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.655 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.655 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.655 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:26.655 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.655 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:26.655 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.655 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.655 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:26.655 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.655 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.655 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.655 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:26.655 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.655 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.655 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.655 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:26.655 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.655 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.655 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.655 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:26.655 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.655 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.655 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.655 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:26.655 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.655 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.655 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.655 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:26.655 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.655 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.655 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.655 02:54:12 
accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:26.655 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.655 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.655 02:54:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.591 02:54:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:27.591 02:54:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:27.591 02:54:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:27.591 02:54:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.591 02:54:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:27.591 02:54:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:27.591 02:54:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:27.591 02:54:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.591 02:54:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:27.591 02:54:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:27.591 02:54:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:27.591 02:54:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.591 02:54:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:27.591 02:54:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:27.591 02:54:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:27.591 02:54:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.591 02:54:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:27.591 02:54:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:27.591 02:54:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:27.591 02:54:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.591 02:54:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:27.591 02:54:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:27.591 02:54:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:27.591 02:54:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.591 02:54:13 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:27.591 02:54:13 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:27.591 02:54:13 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:27.591 00:07:27.591 real 0m1.415s 00:07:27.591 user 0m1.201s 00:07:27.591 sys 0m0.123s 00:07:27.591 02:54:13 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:27.591 02:54:13 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:27.591 ************************************ 00:07:27.591 END TEST accel_copy_crc32c 00:07:27.591 ************************************ 00:07:27.591 02:54:13 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:27.591 02:54:13 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:27.591 02:54:13 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:27.591 02:54:13 accel -- common/autotest_common.sh@10 -- # set +x 00:07:27.591 ************************************ 00:07:27.591 START TEST accel_copy_crc32c_C2 00:07:27.591 ************************************ 00:07:27.591 02:54:13 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:27.591 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:27.591 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:27.591 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:27.591 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:27.591 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:27.591 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:27.591 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:27.849 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:27.849 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:27.849 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.849 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.849 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:27.849 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:27.849 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:27.849 [2024-05-14 02:54:13.661827] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:07:27.849 [2024-05-14 02:54:13.662028] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77578 ] 00:07:27.849 [2024-05-14 02:54:13.809870] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
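The two notices directly above are the per-test accel_perf startup: the EAL parameters line records how the app was launched (single core mask 0x1, pa IOVA mode, and a per-run --file-prefix, spdk_pid77578 for this test), and the pci_dpdk notice flags that the in-development DPDK 24.07.0-rc0 is enabled for validation only. The lines that follow report the available core count and the reactor starting on core 0, after which the value trace begins. Between accel_copy_crc32c and this accel_copy_crc32c_C2 run only the accel_perf command line changes; both invocations below are copied from this log, while the reading of -C 2 as two chained 4096-byte source buffers (which would explain the extra '8192 bytes' value in this test's trace) is an assumption, not something the log states.

    # Invocations as they appear in this log; the run_test/accel_test wrapper around them is implied, not shown.
    accel_perf=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
    "$accel_perf" -c /dev/fd/62 -t 1 -w copy_crc32c -y         # accel_copy_crc32c
    "$accel_perf" -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2    # accel_copy_crc32c_C2 (assumed: 2 chained buffers)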
00:07:27.850 [2024-05-14 02:54:13.834124] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.108 [2024-05-14 02:54:13.881049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.108 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:28.108 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.108 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.108 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.108 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:28.108 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.108 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.108 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.108 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:28.108 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.108 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.108 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.108 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:28.108 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.108 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.108 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.108 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:28.108 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.108 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.108 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.108 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:28.108 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.108 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:28.108 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.108 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.108 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:28.108 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.108 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.108 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.108 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:28.108 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.108 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.108 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.108 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:28.108 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.108 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.108 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.108 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:28.108 02:54:13 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@21 -- # case "$var" in 00:07:28.108 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.108 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.108 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:28.109 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.109 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:28.109 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.109 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.109 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:28.109 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.109 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.109 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.109 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:28.109 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.109 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.109 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.109 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:28.109 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.109 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.109 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.109 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:28.109 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.109 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.109 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.109 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:28.109 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.109 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.109 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.109 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:28.109 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.109 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.109 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.109 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:28.109 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.109 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.109 02:54:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.045 02:54:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:29.045 02:54:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.045 02:54:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.045 02:54:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.045 02:54:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:29.045 02:54:15 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:07:29.045 02:54:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.045 02:54:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.045 02:54:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:29.045 02:54:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.045 02:54:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.045 02:54:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.045 02:54:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:29.045 02:54:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.045 02:54:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.045 02:54:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.045 02:54:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:29.045 02:54:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.045 02:54:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.045 02:54:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.045 02:54:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:29.045 02:54:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.045 02:54:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.045 02:54:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.045 02:54:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:29.045 ************************************ 00:07:29.045 END TEST accel_copy_crc32c_C2 00:07:29.045 ************************************ 00:07:29.045 02:54:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:29.045 02:54:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:29.045 00:07:29.045 real 0m1.431s 00:07:29.045 user 0m0.012s 00:07:29.045 sys 0m0.004s 00:07:29.045 02:54:15 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:29.045 02:54:15 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:29.304 02:54:15 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:29.304 02:54:15 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:29.304 02:54:15 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:29.304 02:54:15 accel -- common/autotest_common.sh@10 -- # set +x 00:07:29.304 ************************************ 00:07:29.304 START TEST accel_dualcast 00:07:29.304 ************************************ 00:07:29.304 02:54:15 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:07:29.304 02:54:15 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:29.304 02:54:15 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:29.304 02:54:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:29.304 02:54:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:29.304 02:54:15 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:29.304 02:54:15 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:29.304 02:54:15 accel.accel_dualcast -- 
accel/accel.sh@12 -- # build_accel_config 00:07:29.304 02:54:15 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:29.304 02:54:15 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:29.304 02:54:15 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.304 02:54:15 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.304 02:54:15 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:29.304 02:54:15 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:29.304 02:54:15 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:29.304 [2024-05-14 02:54:15.144095] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:07:29.304 [2024-05-14 02:54:15.144272] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77608 ] 00:07:29.304 [2024-05-14 02:54:15.290510] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:29.304 [2024-05-14 02:54:15.310971] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.563 [2024-05-14 02:54:15.346937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.563 02:54:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:29.563 02:54:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:29.563 02:54:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:29.563 02:54:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:29.564 02:54:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:29.564 02:54:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:29.564 02:54:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:29.564 02:54:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:29.564 02:54:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:29.564 02:54:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:29.564 02:54:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:29.564 02:54:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:29.564 02:54:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:29.564 02:54:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:29.564 02:54:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:29.564 02:54:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:29.564 02:54:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:29.564 02:54:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:29.564 02:54:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:29.564 02:54:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:29.564 02:54:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:29.564 02:54:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:29.564 02:54:15 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:29.564 02:54:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:29.564 02:54:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:29.564 02:54:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:29.564 02:54:15 
accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:29.564 02:54:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:29.564 02:54:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:29.564 02:54:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:29.564 02:54:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:29.564 02:54:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:29.564 02:54:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:29.564 02:54:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:29.564 02:54:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:29.564 02:54:15 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:29.564 02:54:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:29.564 02:54:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:29.564 02:54:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:29.564 02:54:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:29.564 02:54:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:29.564 02:54:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:29.564 02:54:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:29.564 02:54:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:29.564 02:54:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:29.564 02:54:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:29.564 02:54:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:29.564 02:54:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:29.564 02:54:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:29.564 02:54:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:29.564 02:54:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:29.564 02:54:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:29.564 02:54:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:29.564 02:54:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:29.564 02:54:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:29.564 02:54:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:29.564 02:54:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:29.564 02:54:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:29.564 02:54:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:29.564 02:54:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:29.564 02:54:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:29.564 02:54:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:29.564 02:54:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:29.564 02:54:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:29.564 02:54:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:29.564 02:54:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:30.497 02:54:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:30.497 02:54:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:30.497 02:54:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:30.497 02:54:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:30.497 02:54:16 
accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:30.497 02:54:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:30.497 02:54:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:30.497 02:54:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:30.497 02:54:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:30.497 02:54:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:30.497 02:54:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:30.497 02:54:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:30.497 02:54:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:30.497 02:54:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:30.497 02:54:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:30.497 02:54:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:30.497 02:54:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:30.497 02:54:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:30.497 02:54:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:30.497 02:54:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:30.497 02:54:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:30.497 02:54:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:30.498 02:54:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:30.498 02:54:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:30.498 ************************************ 00:07:30.498 END TEST accel_dualcast 00:07:30.498 ************************************ 00:07:30.498 02:54:16 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:30.498 02:54:16 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:30.498 02:54:16 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:30.498 00:07:30.498 real 0m1.413s 00:07:30.498 user 0m0.011s 00:07:30.498 sys 0m0.004s 00:07:30.498 02:54:16 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:30.498 02:54:16 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:30.756 02:54:16 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:30.756 02:54:16 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:30.756 02:54:16 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:30.756 02:54:16 accel -- common/autotest_common.sh@10 -- # set +x 00:07:30.756 ************************************ 00:07:30.756 START TEST accel_compare 00:07:30.756 ************************************ 00:07:30.756 02:54:16 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:07:30.756 02:54:16 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:30.756 02:54:16 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:30.756 02:54:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:30.756 02:54:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:30.756 02:54:16 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:30.756 02:54:16 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:30.756 02:54:16 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:30.756 02:54:16 
accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:30.756 02:54:16 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:30.756 02:54:16 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.756 02:54:16 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.756 02:54:16 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:30.756 02:54:16 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:30.757 02:54:16 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:30.757 [2024-05-14 02:54:16.603169] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:07:30.757 [2024-05-14 02:54:16.603360] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77649 ] 00:07:30.757 [2024-05-14 02:54:16.748669] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:30.757 [2024-05-14 02:54:16.767850] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.016 [2024-05-14 02:54:16.801374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.016 02:54:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:31.016 02:54:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.016 02:54:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.016 02:54:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.016 02:54:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:31.016 02:54:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.016 02:54:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.016 02:54:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.016 02:54:16 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:31.016 02:54:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.016 02:54:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.016 02:54:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.016 02:54:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:31.016 02:54:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.016 02:54:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.016 02:54:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.016 02:54:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:31.016 02:54:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.016 02:54:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.016 02:54:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.016 02:54:16 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:31.016 02:54:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.016 02:54:16 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:31.016 02:54:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.016 02:54:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.016 02:54:16 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:31.016 02:54:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.016 02:54:16 accel.accel_compare -- 
accel/accel.sh@19 -- # IFS=: 00:07:31.016 02:54:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.016 02:54:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:31.016 02:54:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.016 02:54:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.016 02:54:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.016 02:54:16 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:31.016 02:54:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.016 02:54:16 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:31.016 02:54:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.016 02:54:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.016 02:54:16 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:31.016 02:54:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.016 02:54:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.016 02:54:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.016 02:54:16 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:31.016 02:54:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.016 02:54:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.016 02:54:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.016 02:54:16 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:31.016 02:54:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.016 02:54:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.016 02:54:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.016 02:54:16 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:31.016 02:54:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.016 02:54:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.016 02:54:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.017 02:54:16 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:31.017 02:54:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.017 02:54:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.017 02:54:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.017 02:54:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:31.017 02:54:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.017 02:54:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.017 02:54:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.017 02:54:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:31.017 02:54:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.017 02:54:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.017 02:54:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.952 02:54:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:31.952 02:54:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.952 02:54:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.952 02:54:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.952 02:54:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:31.952 02:54:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.952 02:54:17 
accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.952 02:54:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.952 02:54:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:31.952 02:54:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.952 02:54:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.952 02:54:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.952 02:54:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:31.952 02:54:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.952 02:54:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.952 02:54:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.952 02:54:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:31.952 02:54:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.952 02:54:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.952 02:54:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.952 02:54:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:31.952 02:54:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.953 02:54:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.953 02:54:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.953 02:54:17 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:31.953 02:54:17 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:31.953 02:54:17 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:31.953 00:07:31.953 real 0m1.402s 00:07:31.953 user 0m1.189s 00:07:31.953 sys 0m0.121s 00:07:31.953 ************************************ 00:07:31.953 END TEST accel_compare 00:07:31.953 ************************************ 00:07:31.953 02:54:17 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:31.953 02:54:17 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:32.212 02:54:17 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:32.212 02:54:17 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:32.212 02:54:17 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:32.212 02:54:17 accel -- common/autotest_common.sh@10 -- # set +x 00:07:32.212 ************************************ 00:07:32.212 START TEST accel_xor 00:07:32.212 ************************************ 00:07:32.212 02:54:18 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:07:32.212 02:54:18 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:32.212 02:54:18 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:32.212 02:54:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.212 02:54:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:32.212 02:54:18 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:32.212 02:54:18 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:32.212 02:54:18 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:32.212 02:54:18 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:32.212 02:54:18 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:32.212 02:54:18 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.212 02:54:18 
accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.212 02:54:18 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:32.212 02:54:18 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:32.212 02:54:18 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:32.212 [2024-05-14 02:54:18.055081] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:07:32.212 [2024-05-14 02:54:18.055298] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77679 ] 00:07:32.212 [2024-05-14 02:54:18.205311] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:32.212 [2024-05-14 02:54:18.226893] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.471 [2024-05-14 02:54:18.267800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.471 02:54:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:32.471 02:54:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:32.471 02:54:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.471 02:54:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:32.471 02:54:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:32.471 02:54:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:32.471 02:54:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.471 02:54:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:32.471 02:54:18 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:32.472 02:54:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:32.472 02:54:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.472 02:54:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:32.472 02:54:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:32.472 02:54:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:32.472 02:54:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.472 02:54:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:32.472 02:54:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:32.472 02:54:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:32.472 02:54:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.472 02:54:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:32.472 02:54:18 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:32.472 02:54:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:32.472 02:54:18 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:32.472 02:54:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.472 02:54:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:32.472 02:54:18 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:32.472 02:54:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:32.472 02:54:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.472 02:54:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:32.472 02:54:18 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:32.472 02:54:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:32.472 02:54:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.472 02:54:18 accel.accel_xor -- accel/accel.sh@19 -- 
# read -r var val 00:07:32.472 02:54:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:32.472 02:54:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:32.472 02:54:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.472 02:54:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:32.472 02:54:18 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:32.472 02:54:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:32.472 02:54:18 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:07:32.472 02:54:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.472 02:54:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:32.472 02:54:18 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:32.472 02:54:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:32.472 02:54:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.472 02:54:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:32.472 02:54:18 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:32.472 02:54:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:32.472 02:54:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.472 02:54:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:32.472 02:54:18 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:32.472 02:54:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:32.472 02:54:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.472 02:54:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:32.472 02:54:18 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:32.472 02:54:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:32.472 02:54:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.472 02:54:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:32.472 02:54:18 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:32.472 02:54:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:32.472 02:54:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.472 02:54:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:32.472 02:54:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:32.472 02:54:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:32.472 02:54:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.472 02:54:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:32.472 02:54:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:32.472 02:54:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:32.472 02:54:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.472 02:54:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:33.409 02:54:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:33.409 02:54:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:33.409 02:54:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:33.409 02:54:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:33.409 02:54:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:33.409 02:54:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:33.409 02:54:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:33.409 02:54:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:33.409 02:54:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:33.409 02:54:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" 
in 00:07:33.409 02:54:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:33.409 02:54:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:33.409 02:54:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:33.409 02:54:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:33.409 02:54:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:33.409 02:54:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:33.409 02:54:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:33.409 02:54:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:33.409 02:54:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:33.409 02:54:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:33.409 02:54:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:33.409 02:54:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:33.409 02:54:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:33.409 02:54:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:33.409 02:54:19 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:33.409 02:54:19 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:33.409 02:54:19 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:33.409 00:07:33.409 real 0m1.417s 00:07:33.409 user 0m0.011s 00:07:33.409 sys 0m0.003s 00:07:33.409 02:54:19 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:33.409 ************************************ 00:07:33.409 END TEST accel_xor 00:07:33.409 ************************************ 00:07:33.409 02:54:19 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:33.668 02:54:19 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:33.668 02:54:19 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:33.668 02:54:19 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:33.668 02:54:19 accel -- common/autotest_common.sh@10 -- # set +x 00:07:33.668 ************************************ 00:07:33.668 START TEST accel_xor 00:07:33.668 ************************************ 00:07:33.668 02:54:19 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:07:33.668 02:54:19 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:33.668 02:54:19 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:33.668 02:54:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:33.668 02:54:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:33.668 02:54:19 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:33.668 02:54:19 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:33.668 02:54:19 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:33.668 02:54:19 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:33.668 02:54:19 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:33.668 02:54:19 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.668 02:54:19 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.668 02:54:19 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:33.668 02:54:19 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:33.668 02:54:19 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 
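The accel_xor runs in this stretch of the log are driven by SPDK's accel_perf example binary; the accel.sh wrapper assembles a JSON accel configuration and feeds it in via -c /dev/fd/62, and with no hardware module configured the run falls back to the software engine (hence accel_module=software in the trace). Stripped of the harness-supplied config, the second invocation shown above reduces to roughly the sketch below; the -x 3 flag appears to correspond to the three xor source buffers echoed as val=3 (the earlier run used two), and the remaining values in the trace (1-second duration, 4096-byte buffers) match the -t 1 flag and the defaults echoed by the wrapper.

  # xor workload as launched by run_test accel_xor above (config pipe omitted)
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3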
00:07:33.668 [2024-05-14 02:54:19.532004] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:07:33.668 [2024-05-14 02:54:19.532299] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77715 ] 00:07:33.668 [2024-05-14 02:54:19.680462] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:33.927 [2024-05-14 02:54:19.700901] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.927 [2024-05-14 02:54:19.738890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.927 02:54:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:33.927 02:54:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:33.927 02:54:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:33.927 02:54:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:33.927 02:54:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:33.927 02:54:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:33.927 02:54:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:33.927 02:54:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:33.927 02:54:19 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:33.927 02:54:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:33.927 02:54:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:33.927 02:54:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:33.927 02:54:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:33.927 02:54:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:33.927 02:54:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:33.927 02:54:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:33.927 02:54:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:33.927 02:54:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:33.927 02:54:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:33.927 02:54:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:33.927 02:54:19 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:33.927 02:54:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:33.927 02:54:19 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:33.927 02:54:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:33.927 02:54:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:33.927 02:54:19 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:33.927 02:54:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:33.927 02:54:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:33.927 02:54:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:33.927 02:54:19 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:33.927 02:54:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:33.927 02:54:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:33.927 02:54:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:33.927 02:54:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:33.927 02:54:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:33.927 02:54:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:33.927 02:54:19 accel.accel_xor -- 
accel/accel.sh@19 -- # read -r var val 00:07:33.927 02:54:19 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:33.927 02:54:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:33.927 02:54:19 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:07:33.927 02:54:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:33.927 02:54:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:33.927 02:54:19 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:33.927 02:54:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:33.927 02:54:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:33.927 02:54:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:33.927 02:54:19 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:33.927 02:54:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:33.927 02:54:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:33.927 02:54:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:33.927 02:54:19 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:33.927 02:54:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:33.927 02:54:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:33.927 02:54:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:33.927 02:54:19 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:33.927 02:54:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:33.927 02:54:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:33.927 02:54:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:33.927 02:54:19 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:33.927 02:54:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:33.928 02:54:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:33.928 02:54:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:33.928 02:54:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:33.928 02:54:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:33.928 02:54:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:33.928 02:54:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:33.928 02:54:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:33.928 02:54:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:33.928 02:54:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:33.928 02:54:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.868 02:54:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:34.868 02:54:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.868 02:54:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.868 02:54:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.868 02:54:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:34.868 02:54:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.868 02:54:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.868 02:54:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.868 02:54:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:34.868 02:54:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.868 02:54:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.868 02:54:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.868 02:54:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:34.868 02:54:20 accel.accel_xor -- 
accel/accel.sh@21 -- # case "$var" in 00:07:34.868 02:54:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.868 02:54:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.868 02:54:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:34.868 02:54:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.868 02:54:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.868 02:54:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.868 02:54:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:34.868 02:54:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.868 02:54:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.868 02:54:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.868 02:54:20 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:34.868 02:54:20 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:34.868 02:54:20 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:34.868 00:07:34.868 real 0m1.409s 00:07:34.868 user 0m1.189s 00:07:34.868 sys 0m0.124s 00:07:34.868 02:54:20 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:34.868 02:54:20 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:34.868 ************************************ 00:07:34.868 END TEST accel_xor 00:07:34.868 ************************************ 00:07:35.127 02:54:20 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:35.127 02:54:20 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:35.127 02:54:20 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:35.127 02:54:20 accel -- common/autotest_common.sh@10 -- # set +x 00:07:35.127 ************************************ 00:07:35.127 START TEST accel_dif_verify 00:07:35.127 ************************************ 00:07:35.127 02:54:20 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:07:35.127 02:54:20 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:35.127 02:54:20 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:35.127 02:54:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:35.127 02:54:20 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:35.127 02:54:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:35.128 02:54:20 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:35.128 02:54:20 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:35.128 02:54:20 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:35.128 02:54:20 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:35.128 02:54:20 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.128 02:54:20 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.128 02:54:20 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:35.128 02:54:20 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:35.128 02:54:20 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:35.128 [2024-05-14 02:54:20.991308] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 
00:07:35.128 [2024-05-14 02:54:20.991522] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77750 ] 00:07:35.128 [2024-05-14 02:54:21.139666] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:35.387 [2024-05-14 02:54:21.160930] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.387 [2024-05-14 02:54:21.200746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:35.387 02:54:21 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:35.387 02:54:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:35.387 02:54:21 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:36.764 02:54:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:36.764 02:54:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:36.764 02:54:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:36.764 02:54:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:36.764 02:54:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:36.764 02:54:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:36.764 02:54:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:36.764 02:54:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:36.764 02:54:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:36.764 02:54:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:36.764 02:54:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:36.764 02:54:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:36.764 02:54:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:36.764 02:54:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:36.764 02:54:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:36.764 02:54:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:36.764 02:54:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:36.764 02:54:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:36.764 02:54:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:36.764 02:54:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:36.764 02:54:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:36.764 02:54:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:36.764 02:54:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:36.764 02:54:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:36.764 02:54:22 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:36.764 02:54:22 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:36.764 02:54:22 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:36.764 00:07:36.764 real 0m1.435s 00:07:36.764 user 0m1.208s 00:07:36.764 sys 0m0.134s 00:07:36.764 02:54:22 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:36.764 02:54:22 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:36.764 ************************************ 00:07:36.764 END TEST accel_dif_verify 00:07:36.764 ************************************ 00:07:36.764 02:54:22 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:36.764 02:54:22 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:36.764 02:54:22 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:36.764 02:54:22 accel -- common/autotest_common.sh@10 -- # set +x 00:07:36.764 ************************************ 00:07:36.764 START TEST accel_dif_generate 00:07:36.764 ************************************ 00:07:36.764 02:54:22 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:36.764 02:54:22 
accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:36.764 [2024-05-14 02:54:22.472997] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:07:36.764 [2024-05-14 02:54:22.473271] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77786 ] 00:07:36.764 [2024-05-14 02:54:22.610958] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:36.764 [2024-05-14 02:54:22.628448] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.764 [2024-05-14 02:54:22.662812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:36.764 02:54:22 accel.accel_dif_generate -- 
accel/accel.sh@19 -- # read -r var val 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 
00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:36.764 02:54:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:36.765 02:54:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:36.765 02:54:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:36.765 02:54:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:36.765 02:54:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:36.765 02:54:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:36.765 02:54:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:36.765 02:54:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:36.765 02:54:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:36.765 02:54:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.148 02:54:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:38.148 02:54:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.148 02:54:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.148 02:54:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.148 02:54:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:38.148 02:54:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.148 02:54:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.148 02:54:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.148 02:54:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:38.148 02:54:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.148 02:54:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.148 02:54:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.148 02:54:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:38.148 02:54:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.148 02:54:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.148 02:54:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.148 02:54:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:38.148 02:54:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.148 02:54:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.148 02:54:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.148 02:54:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:38.148 02:54:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.148 02:54:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.148 02:54:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.148 02:54:23 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:38.148 02:54:23 
accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:38.148 02:54:23 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:38.148 00:07:38.148 real 0m1.400s 00:07:38.148 user 0m0.018s 00:07:38.148 sys 0m0.001s 00:07:38.148 02:54:23 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:38.148 ************************************ 00:07:38.148 END TEST accel_dif_generate 00:07:38.148 ************************************ 00:07:38.148 02:54:23 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:38.148 02:54:23 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:38.148 02:54:23 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:38.148 02:54:23 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:38.148 02:54:23 accel -- common/autotest_common.sh@10 -- # set +x 00:07:38.148 ************************************ 00:07:38.148 START TEST accel_dif_generate_copy 00:07:38.148 ************************************ 00:07:38.148 02:54:23 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:07:38.148 02:54:23 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:38.148 02:54:23 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:38.148 02:54:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.148 02:54:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:38.148 02:54:23 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:38.148 02:54:23 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:38.148 02:54:23 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:38.148 02:54:23 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:38.148 02:54:23 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:38.148 02:54:23 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.149 02:54:23 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.149 02:54:23 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:38.149 02:54:23 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:38.149 02:54:23 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:38.149 [2024-05-14 02:54:23.924464] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:07:38.149 [2024-05-14 02:54:23.924704] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77821 ] 00:07:38.149 [2024-05-14 02:54:24.071510] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
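The three DIF workloads in this part of the suite (dif_verify, dif_generate, dif_generate_copy) reuse the same accel_perf harness; the values echoed by accel.sh show 4096-byte data buffers and, for the verify/generate cases, the additional 512-byte and 8-byte sizes tied to the DIF metadata layout. Outside the wrapper, which again supplies the JSON config on /dev/fd/62, the corresponding invocations from the trace reduce to approximately:

  # DIF workloads as exercised by run_test accel_dif_* above, 1-second runs
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_verify
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy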
00:07:38.149 [2024-05-14 02:54:24.090414] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.149 [2024-05-14 02:54:24.129067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 
00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.149 02:54:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:39.527 02:54:25 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:39.527 02:54:25 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:39.527 02:54:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:39.527 02:54:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:39.527 02:54:25 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:39.527 02:54:25 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:39.527 02:54:25 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:07:39.527 02:54:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:39.527 02:54:25 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:39.527 02:54:25 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:39.527 02:54:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:39.527 02:54:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:39.527 02:54:25 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:39.527 02:54:25 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:39.527 02:54:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:39.527 02:54:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:39.527 02:54:25 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:39.527 02:54:25 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:39.527 02:54:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:39.527 02:54:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:39.527 02:54:25 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:39.527 02:54:25 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:39.527 02:54:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:39.527 02:54:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:39.527 02:54:25 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:39.527 02:54:25 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:39.527 02:54:25 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:39.527 00:07:39.527 real 0m1.404s 00:07:39.527 user 0m1.182s 00:07:39.527 sys 0m0.131s 00:07:39.527 ************************************ 00:07:39.527 END TEST accel_dif_generate_copy 00:07:39.527 ************************************ 00:07:39.527 02:54:25 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:39.527 02:54:25 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:39.527 02:54:25 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:39.527 02:54:25 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:39.527 02:54:25 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:07:39.527 02:54:25 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:39.527 02:54:25 accel -- common/autotest_common.sh@10 -- # set +x 00:07:39.527 ************************************ 00:07:39.527 START TEST accel_comp 00:07:39.527 ************************************ 00:07:39.527 02:54:25 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:39.527 02:54:25 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:07:39.527 02:54:25 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:07:39.527 02:54:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:39.527 02:54:25 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:39.527 02:54:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:39.527 02:54:25 accel.accel_comp -- accel/accel.sh@12 
-- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:39.527 02:54:25 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:39.527 02:54:25 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:39.527 02:54:25 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:39.527 02:54:25 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:39.527 02:54:25 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:39.527 02:54:25 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:39.527 02:54:25 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:39.527 02:54:25 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:39.527 [2024-05-14 02:54:25.382472] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:07:39.527 [2024-05-14 02:54:25.382704] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77851 ] 00:07:39.527 [2024-05-14 02:54:25.530696] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:39.527 [2024-05-14 02:54:25.550981] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.785 [2024-05-14 02:54:25.589343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.785 02:54:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:39.785 02:54:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.785 02:54:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:39.785 02:54:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:39.785 02:54:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:39.785 02:54:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.785 02:54:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:39.785 02:54:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:39.785 02:54:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:39.785 02:54:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.786 02:54:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:39.786 02:54:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:39.786 02:54:25 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:39.786 02:54:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.786 02:54:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:39.786 02:54:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:39.786 02:54:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:39.786 02:54:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.786 02:54:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:39.786 02:54:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:39.786 02:54:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:39.786 02:54:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.786 02:54:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:39.786 02:54:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:39.786 02:54:25 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:39.786 02:54:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 
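The compress test starting here supplies a real input file to the workload via -l (the bundled test/accel/bib file) rather than synthetic buffers, and the decompress test queued after it (run_test accel_decomp ... -w decompress -l ... -y) reuses the same file. Minus the harness-supplied config pipe, the two invocations visible in the trace are approximately:

  # compress / decompress runs against the bundled test file, 1-second duration each
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y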
00:07:39.786 02:54:25 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:39.786 02:54:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:39.786 02:54:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:39.786 02:54:25 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:39.786 02:54:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.786 02:54:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:39.786 02:54:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:39.786 02:54:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:39.786 02:54:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.786 02:54:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:39.786 02:54:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:39.786 02:54:25 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:39.786 02:54:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.786 02:54:25 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:39.786 02:54:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:39.786 02:54:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:39.786 02:54:25 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:39.786 02:54:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.786 02:54:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:39.786 02:54:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:39.786 02:54:25 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:39.786 02:54:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.786 02:54:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:39.786 02:54:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:39.786 02:54:25 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:39.786 02:54:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.786 02:54:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:39.786 02:54:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:39.786 02:54:25 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:39.786 02:54:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.786 02:54:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:39.786 02:54:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:39.786 02:54:25 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:39.786 02:54:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.786 02:54:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:39.786 02:54:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:39.786 02:54:25 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:39.786 02:54:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.786 02:54:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:39.786 02:54:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:39.786 02:54:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:39.786 02:54:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.786 02:54:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:39.786 02:54:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:39.786 02:54:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:39.786 02:54:25 accel.accel_comp -- accel/accel.sh@21 
-- # case "$var" in 00:07:39.786 02:54:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:39.786 02:54:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.160 02:54:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:41.160 02:54:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.160 02:54:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.160 02:54:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.160 02:54:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:41.160 02:54:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.160 02:54:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.160 02:54:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.160 02:54:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:41.160 02:54:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.160 02:54:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.160 02:54:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.160 02:54:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:41.160 02:54:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.160 02:54:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.160 02:54:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.160 02:54:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:41.160 02:54:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.160 02:54:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.161 02:54:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.161 02:54:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:41.161 02:54:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.161 02:54:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.161 02:54:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.161 02:54:26 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:41.161 02:54:26 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:41.161 02:54:26 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:41.161 00:07:41.161 real 0m1.420s 00:07:41.161 user 0m1.189s 00:07:41.161 sys 0m0.143s 00:07:41.161 ************************************ 00:07:41.161 END TEST accel_comp 00:07:41.161 ************************************ 00:07:41.161 02:54:26 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:41.161 02:54:26 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:41.161 02:54:26 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:41.161 02:54:26 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:41.161 02:54:26 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:41.161 02:54:26 accel -- common/autotest_common.sh@10 -- # set +x 00:07:41.161 ************************************ 00:07:41.161 START TEST accel_decomp 00:07:41.161 ************************************ 00:07:41.161 02:54:26 accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:41.161 02:54:26 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:41.161 02:54:26 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:41.161 02:54:26 accel.accel_decomp 
-- accel/accel.sh@19 -- # IFS=: 00:07:41.161 02:54:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.161 02:54:26 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:41.161 02:54:26 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:41.161 02:54:26 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:41.161 02:54:26 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:41.161 02:54:26 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:41.161 02:54:26 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:41.161 02:54:26 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.161 02:54:26 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:41.161 02:54:26 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:41.161 02:54:26 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:41.161 [2024-05-14 02:54:26.861414] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:07:41.161 [2024-05-14 02:54:26.861622] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77887 ] 00:07:41.161 [2024-05-14 02:54:27.009754] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:41.161 [2024-05-14 02:54:27.030605] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.161 [2024-05-14 02:54:27.064344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 
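The command line traced just above shows how accel_test drives accel_perf for the decompress case: a JSON accel config is fed in over /dev/fd/62 and the pre-compressed bib file is supplied with -l. Below is a minimal sketch of reproducing that run by hand, using only the flags visible in this log; the temporary config file is an assumption standing in for the pipe the harness normally provides.

    # Hypothetical standalone reproduction of the accel_decomp run traced above;
    # flags are copied from the logged accel_perf command line.
    SPDK_REPO=/home/vagrant/spdk_repo/spdk       # repo path as it appears in the log
    cfg=$(mktemp)
    echo '{}' > "$cfg"                           # empty accel JSON config (placeholder)
    # -t 1: run for 1 second, -w decompress: workload, -l: compressed input, -y: verify
    "$SPDK_REPO/build/examples/accel_perf" -c "$cfg" -t 1 -w decompress \
        -l "$SPDK_REPO/test/accel/bib" -y
    rm -f "$cfg"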
00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.161 02:54:27 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.161 02:54:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:42.538 02:54:28 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:42.538 02:54:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:42.538 02:54:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:42.538 02:54:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:42.538 02:54:28 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:42.538 02:54:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:42.538 02:54:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:42.538 02:54:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:42.538 02:54:28 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:42.538 02:54:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:42.538 02:54:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:42.538 02:54:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:42.538 02:54:28 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:42.538 02:54:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:42.538 02:54:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:42.538 02:54:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:42.538 02:54:28 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:42.538 02:54:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:42.538 02:54:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:42.538 02:54:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:42.538 02:54:28 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:42.538 02:54:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:42.538 02:54:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:42.538 02:54:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:42.538 ************************************ 00:07:42.538 END TEST accel_decomp 00:07:42.538 ************************************ 00:07:42.538 02:54:28 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:42.538 02:54:28 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:42.538 02:54:28 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:42.538 00:07:42.538 real 0m1.421s 00:07:42.538 user 0m1.210s 00:07:42.538 sys 0m0.120s 00:07:42.538 02:54:28 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:42.538 02:54:28 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:42.538 02:54:28 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:42.538 02:54:28 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:42.538 02:54:28 accel -- common/autotest_common.sh@1103 -- # 
xtrace_disable 00:07:42.538 02:54:28 accel -- common/autotest_common.sh@10 -- # set +x 00:07:42.538 ************************************ 00:07:42.538 START TEST accel_decmop_full 00:07:42.538 ************************************ 00:07:42.538 02:54:28 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:42.538 02:54:28 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:07:42.538 02:54:28 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:07:42.538 02:54:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.538 02:54:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.538 02:54:28 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:42.538 02:54:28 accel.accel_decmop_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:42.538 02:54:28 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:07:42.538 02:54:28 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:42.538 02:54:28 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:42.538 02:54:28 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:42.538 02:54:28 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:42.538 02:54:28 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:42.538 02:54:28 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:07:42.538 02:54:28 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:07:42.538 [2024-05-14 02:54:28.345073] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:07:42.538 [2024-05-14 02:54:28.345598] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77922 ] 00:07:42.539 [2024-05-14 02:54:28.505094] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
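Every option in these traces produces the same three xtrace lines (IFS=:, read -r var val, case "$var" in), because accel_test appears to parse its configuration as colon-separated name/value pairs. A small sketch of that idiom follows, for readers working through the trace; it is an illustration only, not the actual accel.sh source, and the helper and variable names are assumptions.

    # Sketch of the "IFS=: read -r var val" parsing idiom seen in the xtrace above.
    parse_opts() {
        local var val
        while IFS=: read -r var val; do          # split each "name: value" pair
            case "$var" in
                *opc*)    accel_opc=$val ;;      # workload, e.g. compress/decompress
                *module*) accel_module=$val ;;   # backend, e.g. software
                *)        : ;;                   # other settings ignored in this sketch
            esac
        done
    }
    printf '%s\n' 'opc: decompress' 'module: software' | parse_opts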
00:07:42.539 [2024-05-14 02:54:28.525667] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.539 [2024-05-14 02:54:28.562051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.797 02:54:28 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:42.797 02:54:28 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.797 02:54:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.797 02:54:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.797 02:54:28 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:42.797 02:54:28 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.797 02:54:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.797 02:54:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.797 02:54:28 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:42.797 02:54:28 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.797 02:54:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.797 02:54:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.797 02:54:28 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:07:42.797 02:54:28 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.797 02:54:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.797 02:54:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.797 02:54:28 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:42.797 02:54:28 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.797 02:54:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.797 02:54:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.797 02:54:28 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:42.797 02:54:28 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.797 02:54:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.797 02:54:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.797 02:54:28 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:07:42.797 02:54:28 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.797 02:54:28 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:42.797 02:54:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.797 02:54:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.797 02:54:28 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:42.798 02:54:28 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.798 02:54:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.798 02:54:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.798 02:54:28 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:42.798 02:54:28 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.798 02:54:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.798 02:54:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.798 02:54:28 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:07:42.798 02:54:28 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.798 02:54:28 accel.accel_decmop_full -- accel/accel.sh@22 -- # 
accel_module=software 00:07:42.798 02:54:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.798 02:54:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.798 02:54:28 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:42.798 02:54:28 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.798 02:54:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.798 02:54:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.798 02:54:28 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:07:42.798 02:54:28 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.798 02:54:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.798 02:54:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.798 02:54:28 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:07:42.798 02:54:28 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.798 02:54:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.798 02:54:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.798 02:54:28 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:07:42.798 02:54:28 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.798 02:54:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.798 02:54:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.798 02:54:28 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:42.798 02:54:28 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.798 02:54:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.798 02:54:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.798 02:54:28 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:07:42.798 02:54:28 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.798 02:54:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.798 02:54:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.798 02:54:28 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:42.798 02:54:28 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.798 02:54:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.798 02:54:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.798 02:54:28 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:42.798 02:54:28 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.798 02:54:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.798 02:54:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:43.815 02:54:29 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:43.815 02:54:29 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:43.815 02:54:29 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:43.815 02:54:29 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:43.815 02:54:29 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:43.815 02:54:29 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:43.815 02:54:29 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:43.815 02:54:29 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 
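Each test in this log is wrapped by run_test, which prints the START TEST / END TEST banners and the real/user/sys timing shown above. The following is a rough stand-in for what such a wrapper does, inferred from the output alone; the real implementation lives in the common/autotest_common.sh referenced throughout the trace and will differ in detail.

    # Illustrative stand-in for the run_test wrapper (banner text and use of
    # `time` are inferred from the log output, not taken from SPDK source).
    run_test_sketch() {
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"                 # produces the real/user/sys lines seen in the log
        echo "************ END TEST $name ************"
    }
    # Usage: run_test_sketch accel_decomp_demo sleep 1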
00:07:43.815 02:54:29 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:43.815 02:54:29 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:43.815 02:54:29 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:43.815 02:54:29 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:43.815 02:54:29 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:43.815 02:54:29 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:43.815 02:54:29 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:43.815 02:54:29 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:43.815 02:54:29 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:43.815 02:54:29 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:43.815 02:54:29 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:43.815 02:54:29 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:43.815 02:54:29 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:43.815 02:54:29 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:43.815 02:54:29 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:43.815 02:54:29 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:43.815 02:54:29 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:43.815 02:54:29 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:43.815 02:54:29 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:43.815 00:07:43.815 real 0m1.457s 00:07:43.815 user 0m0.018s 00:07:43.815 sys 0m0.003s 00:07:43.815 02:54:29 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:43.815 02:54:29 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:07:43.815 ************************************ 00:07:43.815 END TEST accel_decmop_full 00:07:43.815 ************************************ 00:07:43.815 02:54:29 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:43.815 02:54:29 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:43.815 02:54:29 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:43.815 02:54:29 accel -- common/autotest_common.sh@10 -- # set +x 00:07:43.815 ************************************ 00:07:43.815 START TEST accel_decomp_mcore 00:07:43.815 ************************************ 00:07:43.815 02:54:29 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:43.815 02:54:29 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:43.815 02:54:29 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:43.815 02:54:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:43.815 02:54:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:43.815 02:54:29 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:43.815 02:54:29 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:43.815 02:54:29 accel.accel_decomp_mcore -- accel/accel.sh@12 -- 
# build_accel_config 00:07:43.815 02:54:29 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:43.815 02:54:29 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:43.815 02:54:29 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:43.815 02:54:29 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:43.815 02:54:29 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:43.815 02:54:29 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:43.815 02:54:29 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:44.075 [2024-05-14 02:54:29.841787] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:07:44.075 [2024-05-14 02:54:29.841993] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77958 ] 00:07:44.075 [2024-05-14 02:54:29.992295] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:44.075 [2024-05-14 02:54:30.010150] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:44.075 [2024-05-14 02:54:30.049260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:44.075 [2024-05-14 02:54:30.049409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:44.075 [2024-05-14 02:54:30.049842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.075 [2024-05-14 02:54:30.049893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- 
accel/accel.sh@20 -- # val=Yes 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.075 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:44.334 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.335 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.335 02:54:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.271 02:54:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:45.271 02:54:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.271 02:54:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.271 02:54:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.271 02:54:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:45.271 02:54:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.271 02:54:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.271 02:54:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.271 02:54:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:45.271 02:54:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.271 02:54:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.271 02:54:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.271 02:54:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:45.271 02:54:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.271 02:54:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.271 02:54:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.271 02:54:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:45.271 02:54:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.271 02:54:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.271 02:54:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.271 02:54:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:45.271 02:54:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.271 02:54:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.271 02:54:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.271 02:54:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:45.271 02:54:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.271 02:54:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.271 02:54:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.271 02:54:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:45.271 02:54:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.271 02:54:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.271 
02:54:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.271 02:54:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:45.271 02:54:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.271 02:54:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.271 02:54:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.271 02:54:31 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:45.272 02:54:31 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:45.272 02:54:31 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:45.272 00:07:45.272 real 0m1.438s 00:07:45.272 user 0m0.024s 00:07:45.272 sys 0m0.002s 00:07:45.272 02:54:31 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:45.272 ************************************ 00:07:45.272 END TEST accel_decomp_mcore 00:07:45.272 ************************************ 00:07:45.272 02:54:31 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:45.272 02:54:31 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:45.272 02:54:31 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:45.272 02:54:31 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:45.272 02:54:31 accel -- common/autotest_common.sh@10 -- # set +x 00:07:45.272 ************************************ 00:07:45.272 START TEST accel_decomp_full_mcore 00:07:45.272 ************************************ 00:07:45.272 02:54:31 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:45.272 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:45.272 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:45.272 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.272 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:45.272 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.272 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:45.272 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:45.272 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:45.272 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:45.272 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:45.272 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:45.272 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:45.272 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:45.272 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:45.532 [2024-05-14 02:54:31.334050] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 
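The mcore variants pass -m 0xf, and the reactor messages above ("Total cores available: 4", reactors started on cores 0-3) show that this mask brings up four cores. Below is a small helper, purely illustrative and not part of SPDK, that expands such a hex mask into the core IDs it enables.

    # Expand a hex core mask (e.g. 0xf) into the list of enabled core IDs.
    mask_to_cores() {
        local mask=$(( $1 )) core=0 cores=()
        while (( mask )); do
            if (( mask & 1 )); then
                cores+=("$core")          # this bit is set -> core enabled
            fi
            mask=$(( mask >> 1 ))
            core=$(( core + 1 ))
        done
        echo "${cores[*]}"
    }
    mask_to_cores 0xf    # prints: 0 1 2 3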
00:07:45.532 [2024-05-14 02:54:31.334265] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77996 ] 00:07:45.532 [2024-05-14 02:54:31.482729] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:45.532 [2024-05-14 02:54:31.500580] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:45.532 [2024-05-14 02:54:31.539920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:45.532 [2024-05-14 02:54:31.540074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:45.532 [2024-05-14 02:54:31.540190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:45.532 [2024-05-14 02:54:31.540315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.791 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:45.791 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.791 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.791 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.791 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:45.791 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.791 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.791 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.791 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:45.791 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.791 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.791 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.791 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:45.791 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.791 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.791 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.791 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:45.791 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.791 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.791 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.791 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:45.791 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.791 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.791 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.791 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:45.791 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.791 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:45.791 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.791 02:54:31 accel.accel_decomp_full_mcore 
-- accel/accel.sh@19 -- # read -r var val 00:07:45.791 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:45.791 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.791 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.791 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.791 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:45.791 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.791 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.791 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.791 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:45.791 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.791 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:45.791 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.791 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.791 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:45.791 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.791 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.791 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.791 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:45.791 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.791 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.791 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.791 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:45.791 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.791 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.791 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.791 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:45.791 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.791 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.791 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.791 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:45.791 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.791 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.792 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.792 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:45.792 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.792 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.792 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.792 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:45.792 02:54:31 accel.accel_decomp_full_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:07:45.792 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.792 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.792 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:45.792 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.792 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.792 02:54:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.728 02:54:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:46.728 02:54:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.728 02:54:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.728 02:54:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.728 02:54:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:46.728 02:54:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.728 02:54:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.728 02:54:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.728 02:54:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:46.728 02:54:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.728 02:54:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.728 02:54:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.728 02:54:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:46.728 02:54:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.728 02:54:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.728 02:54:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.728 02:54:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:46.728 02:54:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.728 02:54:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.728 02:54:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.728 02:54:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:46.728 02:54:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.728 02:54:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.728 02:54:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.728 02:54:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:46.728 02:54:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.728 02:54:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.728 02:54:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.728 02:54:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:46.728 02:54:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.728 02:54:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.728 02:54:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.728 02:54:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:46.728 02:54:32 accel.accel_decomp_full_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:07:46.728 02:54:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.728 ************************************ 00:07:46.728 END TEST accel_decomp_full_mcore 00:07:46.728 ************************************ 00:07:46.728 02:54:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.728 02:54:32 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:46.728 02:54:32 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:46.728 02:54:32 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:46.728 00:07:46.728 real 0m1.436s 00:07:46.728 user 0m0.017s 00:07:46.728 sys 0m0.006s 00:07:46.728 02:54:32 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:46.728 02:54:32 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:46.987 02:54:32 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:46.987 02:54:32 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:46.987 02:54:32 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:46.987 02:54:32 accel -- common/autotest_common.sh@10 -- # set +x 00:07:46.987 ************************************ 00:07:46.987 START TEST accel_decomp_mthread 00:07:46.987 ************************************ 00:07:46.987 02:54:32 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:46.987 02:54:32 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:46.987 02:54:32 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:46.987 02:54:32 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:46.987 02:54:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:46.987 02:54:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:46.987 02:54:32 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:46.987 02:54:32 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:46.987 02:54:32 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:46.987 02:54:32 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:46.987 02:54:32 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:46.987 02:54:32 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:46.988 02:54:32 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:46.988 02:54:32 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:46.988 02:54:32 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:46.988 [2024-05-14 02:54:32.812997] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 
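The five decompress tests traced in this part of the log call accel_test with the same base flags and differ only in the extras: -o 0 (the '111250 bytes' block size in the full variants), -m 0xf (multi-core), and -T 2 (the thread count in the mthread variant). The meanings given here are inferred from the flag names and the values visible in the trace, not from documentation. A loop like the following could replay the variants directly against accel_perf; the config handling is a placeholder, as in the earlier sketch.

    # Replay the decompress variants seen in this log section (illustrative only).
    BIB=/home/vagrant/spdk_repo/spdk/test/accel/bib
    PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
    for extra in "" "-o 0" "-m 0xf" "-o 0 -m 0xf" "-T 2"; do
        echo "running decompress variant: ${extra:-<none>}"
        # word-splitting of $extra is intentional so each flag is a separate argument
        "$PERF" -c <(echo '{}') -t 1 -w decompress -l "$BIB" -y $extra
    done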
00:07:46.988 [2024-05-14 02:54:32.813373] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78035 ] 00:07:46.988 [2024-05-14 02:54:32.958968] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:46.988 [2024-05-14 02:54:32.977193] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.249 [2024-05-14 02:54:33.017344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 
00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.249 02:54:33 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:07:48.191 02:54:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:48.191 02:54:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.191 02:54:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.191 02:54:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.191 02:54:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:48.191 02:54:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.191 02:54:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.192 02:54:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.192 02:54:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:48.192 02:54:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.192 02:54:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.192 02:54:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.192 02:54:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:48.192 02:54:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.192 02:54:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.192 02:54:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.192 02:54:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:48.192 02:54:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.192 02:54:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.192 02:54:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.192 02:54:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:48.192 02:54:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.192 02:54:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.192 02:54:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.192 02:54:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:48.192 02:54:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.192 02:54:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.192 02:54:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.192 02:54:34 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:48.192 02:54:34 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:48.192 02:54:34 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:48.192 00:07:48.192 real 0m1.421s 00:07:48.192 user 0m1.183s 00:07:48.192 sys 0m0.144s 00:07:48.192 02:54:34 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:48.192 02:54:34 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:48.192 ************************************ 00:07:48.192 END TEST accel_decomp_mthread 00:07:48.192 ************************************ 00:07:48.451 02:54:34 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:48.451 02:54:34 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:48.451 02:54:34 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:48.451 02:54:34 accel -- 
common/autotest_common.sh@10 -- # set +x 00:07:48.451 ************************************ 00:07:48.451 START TEST accel_decomp_full_mthread 00:07:48.451 ************************************ 00:07:48.451 02:54:34 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:48.451 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:48.451 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:48.451 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.451 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.451 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:48.451 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:48.451 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:48.451 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:48.451 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:48.451 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:48.451 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:48.451 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:48.451 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:48.451 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:48.451 [2024-05-14 02:54:34.298332] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:07:48.451 [2024-05-14 02:54:34.298651] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78065 ] 00:07:48.451 [2024-05-14 02:54:34.449311] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:48.451 [2024-05-14 02:54:34.470936] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.710 [2024-05-14 02:54:34.506900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.710 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:48.710 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.710 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.710 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.710 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:48.710 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.710 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.710 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.710 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:48.710 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.710 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.710 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.710 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:48.710 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.710 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.710 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.710 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:48.710 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.710 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.710 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.710 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:48.710 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.710 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.710 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.710 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:48.710 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.710 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:48.710 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.710 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.710 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:48.710 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.710 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.710 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.710 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:48.710 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.710 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.710 02:54:34 accel.accel_decomp_full_mthread 
-- accel/accel.sh@19 -- # read -r var val 00:07:48.710 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:48.710 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.710 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:48.710 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.711 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.711 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:48.711 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.711 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.711 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.711 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:48.711 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.711 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.711 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.711 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:48.711 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.711 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.711 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.711 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:48.711 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.711 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.711 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.711 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:48.711 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.711 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.711 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.711 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:48.711 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.711 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.711 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.711 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:48.711 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.711 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.711 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.711 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:48.711 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.711 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.711 02:54:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.087 02:54:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # 
val= 00:07:50.087 02:54:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.087 02:54:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.087 02:54:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.087 02:54:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:50.087 02:54:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.087 02:54:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.087 02:54:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.087 02:54:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:50.087 02:54:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.087 02:54:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.087 02:54:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.088 02:54:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:50.088 02:54:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.088 02:54:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.088 02:54:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.088 02:54:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:50.088 02:54:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.088 02:54:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.088 02:54:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.088 02:54:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:50.088 02:54:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.088 02:54:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.088 02:54:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.088 02:54:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:50.088 02:54:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.088 02:54:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.088 02:54:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.088 02:54:35 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:50.088 02:54:35 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:50.088 02:54:35 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:50.088 00:07:50.088 real 0m1.455s 00:07:50.088 user 0m1.224s 00:07:50.088 sys 0m0.140s 00:07:50.088 02:54:35 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:50.088 02:54:35 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:50.088 ************************************ 00:07:50.088 END TEST accel_decomp_full_mthread 00:07:50.088 ************************************ 00:07:50.088 02:54:35 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:50.088 02:54:35 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:50.088 02:54:35 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:50.088 02:54:35 accel -- 
accel/accel.sh@31 -- # accel_json_cfg=() 00:07:50.088 02:54:35 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:50.088 02:54:35 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:50.088 02:54:35 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:50.088 02:54:35 accel -- common/autotest_common.sh@10 -- # set +x 00:07:50.088 02:54:35 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:50.088 02:54:35 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:50.088 02:54:35 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:50.088 02:54:35 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:50.088 02:54:35 accel -- accel/accel.sh@41 -- # jq -r . 00:07:50.088 ************************************ 00:07:50.088 START TEST accel_dif_functional_tests 00:07:50.088 ************************************ 00:07:50.088 02:54:35 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:50.088 [2024-05-14 02:54:35.838948] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:07:50.088 [2024-05-14 02:54:35.839140] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78107 ] 00:07:50.088 [2024-05-14 02:54:35.989540] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:50.088 [2024-05-14 02:54:36.008193] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:50.088 [2024-05-14 02:54:36.043853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:50.088 [2024-05-14 02:54:36.044001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:50.088 [2024-05-14 02:54:36.043919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.088 00:07:50.088 00:07:50.088 CUnit - A unit testing framework for C - Version 2.1-3 00:07:50.088 http://cunit.sourceforge.net/ 00:07:50.088 00:07:50.088 00:07:50.088 Suite: accel_dif 00:07:50.088 Test: verify: DIF generated, GUARD check ...passed 00:07:50.088 Test: verify: DIF generated, APPTAG check ...passed 00:07:50.088 Test: verify: DIF generated, REFTAG check ...passed 00:07:50.088 Test: verify: DIF not generated, GUARD check ...passed 00:07:50.088 Test: verify: DIF not generated, APPTAG check ...[2024-05-14 02:54:36.097223] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:50.088 [2024-05-14 02:54:36.097329] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:50.088 passed 00:07:50.088 Test: verify: DIF not generated, REFTAG check ...[2024-05-14 02:54:36.097386] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:50.088 [2024-05-14 02:54:36.097507] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:50.088 [2024-05-14 02:54:36.097573] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:50.088 passed 00:07:50.088 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:50.088 Test: verify: APPTAG incorrect, APPTAG check ...[2024-05-14 02:54:36.097682] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:50.088 
[2024-05-14 02:54:36.097779] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:50.088 passed 00:07:50.088 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:50.088 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:50.088 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:50.088 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:07:50.088 Test: generate copy: DIF generated, GUARD check ...[2024-05-14 02:54:36.098117] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:50.088 passed 00:07:50.088 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:50.088 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:50.088 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:50.088 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:50.088 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:50.088 Test: generate copy: iovecs-len validate ...passed 00:07:50.088 Test: generate copy: buffer alignment validate ...passed 00:07:50.088 00:07:50.088 Run Summary: Type Total Ran Passed Failed Inactive 00:07:50.088 suites 1 1 n/a 0 0 00:07:50.088 tests 20 20 20 0 0 00:07:50.088 asserts 204 204 204 0 n/a 00:07:50.088 00:07:50.088 Elapsed time = 0.005 seconds 00:07:50.088 [2024-05-14 02:54:36.098675] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:07:50.346 00:07:50.346 real 0m0.523s 00:07:50.346 user 0m0.526s 00:07:50.346 sys 0m0.186s 00:07:50.346 ************************************ 00:07:50.346 END TEST accel_dif_functional_tests 00:07:50.346 02:54:36 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:50.346 02:54:36 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:50.346 ************************************ 00:07:50.346 00:07:50.346 real 0m32.872s 00:07:50.346 user 0m33.724s 00:07:50.346 sys 0m4.377s 00:07:50.346 ************************************ 00:07:50.346 END TEST accel 00:07:50.346 ************************************ 00:07:50.346 02:54:36 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:50.347 02:54:36 accel -- common/autotest_common.sh@10 -- # set +x 00:07:50.347 02:54:36 -- spdk/autotest.sh@180 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:50.347 02:54:36 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:50.347 02:54:36 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:50.347 02:54:36 -- common/autotest_common.sh@10 -- # set +x 00:07:50.347 ************************************ 00:07:50.347 START TEST accel_rpc 00:07:50.347 ************************************ 00:07:50.347 02:54:36 accel_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:50.605 * Looking for test storage... 
00:07:50.605 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:50.605 02:54:36 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:50.605 02:54:36 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=78171 00:07:50.605 02:54:36 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:50.605 02:54:36 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 78171 00:07:50.605 02:54:36 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 78171 ']' 00:07:50.605 02:54:36 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.605 02:54:36 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:50.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.605 02:54:36 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.605 02:54:36 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:50.605 02:54:36 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.605 [2024-05-14 02:54:36.567591] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:07:50.605 [2024-05-14 02:54:36.567822] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78171 ] 00:07:50.863 [2024-05-14 02:54:36.716469] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:50.863 [2024-05-14 02:54:36.732904] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.863 [2024-05-14 02:54:36.768228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.431 02:54:37 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:51.431 02:54:37 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:51.431 02:54:37 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:51.431 02:54:37 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:51.431 02:54:37 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:51.431 02:54:37 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:51.431 02:54:37 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:51.431 02:54:37 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:51.431 02:54:37 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:51.431 02:54:37 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.431 ************************************ 00:07:51.431 START TEST accel_assign_opcode 00:07:51.431 ************************************ 00:07:51.431 02:54:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:07:51.431 02:54:37 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:51.431 02:54:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.431 02:54:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:51.431 [2024-05-14 02:54:37.453113] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:51.431 02:54:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.431 02:54:37 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:51.431 02:54:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.689 02:54:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:51.689 [2024-05-14 02:54:37.461185] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:51.689 02:54:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.689 02:54:37 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:51.689 02:54:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.689 02:54:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:51.689 02:54:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.689 02:54:37 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:51.689 02:54:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.689 02:54:37 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:51.689 02:54:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:51.689 02:54:37 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:51.689 02:54:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.689 software 00:07:51.689 
************************************ 00:07:51.689 END TEST accel_assign_opcode 00:07:51.689 ************************************ 00:07:51.689 00:07:51.689 real 0m0.213s 00:07:51.689 user 0m0.056s 00:07:51.689 sys 0m0.011s 00:07:51.689 02:54:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:51.689 02:54:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:51.689 02:54:37 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 78171 00:07:51.689 02:54:37 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 78171 ']' 00:07:51.690 02:54:37 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 78171 00:07:51.690 02:54:37 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:07:51.690 02:54:37 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:51.690 02:54:37 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78171 00:07:51.948 02:54:37 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:51.948 killing process with pid 78171 00:07:51.948 02:54:37 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:51.948 02:54:37 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78171' 00:07:51.948 02:54:37 accel_rpc -- common/autotest_common.sh@965 -- # kill 78171 00:07:51.948 02:54:37 accel_rpc -- common/autotest_common.sh@970 -- # wait 78171 00:07:52.207 ************************************ 00:07:52.207 END TEST accel_rpc 00:07:52.207 ************************************ 00:07:52.207 00:07:52.207 real 0m1.651s 00:07:52.207 user 0m1.748s 00:07:52.207 sys 0m0.396s 00:07:52.207 02:54:38 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:52.207 02:54:38 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.207 02:54:38 -- spdk/autotest.sh@181 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:52.207 02:54:38 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:52.207 02:54:38 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:52.207 02:54:38 -- common/autotest_common.sh@10 -- # set +x 00:07:52.207 ************************************ 00:07:52.207 START TEST app_cmdline 00:07:52.207 ************************************ 00:07:52.207 02:54:38 app_cmdline -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:52.207 * Looking for test storage... 00:07:52.207 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:52.207 02:54:38 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:52.207 02:54:38 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=78261 00:07:52.207 02:54:38 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 78261 00:07:52.207 02:54:38 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:52.207 02:54:38 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 78261 ']' 00:07:52.207 02:54:38 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.207 02:54:38 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:52.207 02:54:38 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:52.207 02:54:38 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:52.207 02:54:38 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:52.466 [2024-05-14 02:54:38.265640] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:07:52.466 [2024-05-14 02:54:38.265840] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78261 ] 00:07:52.466 [2024-05-14 02:54:38.417565] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:52.466 [2024-05-14 02:54:38.438303] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.466 [2024-05-14 02:54:38.473108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.402 02:54:39 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:53.402 02:54:39 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:07:53.402 02:54:39 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:53.402 { 00:07:53.402 "version": "SPDK v24.05-pre git sha1 1826c4dc5", 00:07:53.402 "fields": { 00:07:53.402 "major": 24, 00:07:53.402 "minor": 5, 00:07:53.402 "patch": 0, 00:07:53.402 "suffix": "-pre", 00:07:53.402 "commit": "1826c4dc5" 00:07:53.402 } 00:07:53.402 } 00:07:53.402 02:54:39 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:53.402 02:54:39 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:53.402 02:54:39 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:53.402 02:54:39 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:53.402 02:54:39 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:53.402 02:54:39 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:53.402 02:54:39 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.402 02:54:39 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:53.402 02:54:39 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:53.402 02:54:39 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.402 02:54:39 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:53.402 02:54:39 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:53.402 02:54:39 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:53.402 02:54:39 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:53.403 02:54:39 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:53.403 02:54:39 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:53.403 02:54:39 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:53.403 02:54:39 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:53.403 02:54:39 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:53.403 02:54:39 app_cmdline -- common/autotest_common.sh@642 -- # type -P 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:53.661 02:54:39 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:53.661 02:54:39 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:53.661 02:54:39 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:53.661 02:54:39 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:53.661 request: 00:07:53.661 { 00:07:53.661 "method": "env_dpdk_get_mem_stats", 00:07:53.661 "req_id": 1 00:07:53.661 } 00:07:53.661 Got JSON-RPC error response 00:07:53.661 response: 00:07:53.661 { 00:07:53.661 "code": -32601, 00:07:53.661 "message": "Method not found" 00:07:53.661 } 00:07:53.661 02:54:39 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:53.661 02:54:39 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:53.661 02:54:39 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:53.661 02:54:39 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:53.661 02:54:39 app_cmdline -- app/cmdline.sh@1 -- # killprocess 78261 00:07:53.661 02:54:39 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 78261 ']' 00:07:53.661 02:54:39 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 78261 00:07:53.920 02:54:39 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:07:53.920 02:54:39 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:53.920 02:54:39 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78261 00:07:53.920 02:54:39 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:53.920 02:54:39 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:53.920 killing process with pid 78261 00:07:53.920 02:54:39 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78261' 00:07:53.920 02:54:39 app_cmdline -- common/autotest_common.sh@965 -- # kill 78261 00:07:53.920 02:54:39 app_cmdline -- common/autotest_common.sh@970 -- # wait 78261 00:07:54.178 00:07:54.178 real 0m1.916s 00:07:54.178 user 0m2.426s 00:07:54.178 sys 0m0.414s 00:07:54.178 02:54:39 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:54.178 02:54:39 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:54.178 ************************************ 00:07:54.178 END TEST app_cmdline 00:07:54.178 ************************************ 00:07:54.178 02:54:40 -- spdk/autotest.sh@182 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:54.178 02:54:40 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:54.178 02:54:40 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:54.178 02:54:40 -- common/autotest_common.sh@10 -- # set +x 00:07:54.178 ************************************ 00:07:54.178 START TEST version 00:07:54.178 ************************************ 00:07:54.178 02:54:40 version -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:54.178 * Looking for test storage... 
00:07:54.178 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:54.178 02:54:40 version -- app/version.sh@17 -- # get_header_version major 00:07:54.179 02:54:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:54.179 02:54:40 version -- app/version.sh@14 -- # cut -f2 00:07:54.179 02:54:40 version -- app/version.sh@14 -- # tr -d '"' 00:07:54.179 02:54:40 version -- app/version.sh@17 -- # major=24 00:07:54.179 02:54:40 version -- app/version.sh@18 -- # get_header_version minor 00:07:54.179 02:54:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:54.179 02:54:40 version -- app/version.sh@14 -- # cut -f2 00:07:54.179 02:54:40 version -- app/version.sh@14 -- # tr -d '"' 00:07:54.179 02:54:40 version -- app/version.sh@18 -- # minor=5 00:07:54.179 02:54:40 version -- app/version.sh@19 -- # get_header_version patch 00:07:54.179 02:54:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:54.179 02:54:40 version -- app/version.sh@14 -- # cut -f2 00:07:54.179 02:54:40 version -- app/version.sh@14 -- # tr -d '"' 00:07:54.179 02:54:40 version -- app/version.sh@19 -- # patch=0 00:07:54.179 02:54:40 version -- app/version.sh@20 -- # get_header_version suffix 00:07:54.179 02:54:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:54.179 02:54:40 version -- app/version.sh@14 -- # cut -f2 00:07:54.179 02:54:40 version -- app/version.sh@14 -- # tr -d '"' 00:07:54.179 02:54:40 version -- app/version.sh@20 -- # suffix=-pre 00:07:54.179 02:54:40 version -- app/version.sh@22 -- # version=24.5 00:07:54.179 02:54:40 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:54.179 02:54:40 version -- app/version.sh@28 -- # version=24.5rc0 00:07:54.179 02:54:40 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:54.179 02:54:40 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:54.179 02:54:40 version -- app/version.sh@30 -- # py_version=24.5rc0 00:07:54.179 02:54:40 version -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:07:54.179 00:07:54.179 real 0m0.156s 00:07:54.179 user 0m0.101s 00:07:54.179 sys 0m0.091s 00:07:54.179 02:54:40 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:54.179 02:54:40 version -- common/autotest_common.sh@10 -- # set +x 00:07:54.179 ************************************ 00:07:54.179 END TEST version 00:07:54.179 ************************************ 00:07:54.438 02:54:40 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:07:54.438 02:54:40 -- spdk/autotest.sh@194 -- # uname -s 00:07:54.438 02:54:40 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:54.438 02:54:40 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:54.438 02:54:40 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:54.438 02:54:40 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:07:54.438 02:54:40 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:54.438 02:54:40 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 
00:07:54.438 02:54:40 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:54.438 02:54:40 -- common/autotest_common.sh@10 -- # set +x 00:07:54.438 ************************************ 00:07:54.438 START TEST blockdev_nvme 00:07:54.438 ************************************ 00:07:54.438 02:54:40 blockdev_nvme -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:54.438 * Looking for test storage... 00:07:54.438 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:54.438 02:54:40 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:54.438 02:54:40 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:07:54.438 02:54:40 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:54.438 02:54:40 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:54.438 02:54:40 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:54.438 02:54:40 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:54.438 02:54:40 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:07:54.438 02:54:40 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:54.438 02:54:40 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:07:54.438 02:54:40 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:07:54.438 02:54:40 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:07:54.438 02:54:40 blockdev_nvme -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:07:54.438 02:54:40 blockdev_nvme -- bdev/blockdev.sh@674 -- # uname -s 00:07:54.438 02:54:40 blockdev_nvme -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:07:54.438 02:54:40 blockdev_nvme -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:07:54.438 02:54:40 blockdev_nvme -- bdev/blockdev.sh@682 -- # test_type=nvme 00:07:54.438 02:54:40 blockdev_nvme -- bdev/blockdev.sh@683 -- # crypto_device= 00:07:54.438 02:54:40 blockdev_nvme -- bdev/blockdev.sh@684 -- # dek= 00:07:54.438 02:54:40 blockdev_nvme -- bdev/blockdev.sh@685 -- # env_ctx= 00:07:54.438 02:54:40 blockdev_nvme -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:07:54.438 02:54:40 blockdev_nvme -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:07:54.438 02:54:40 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == bdev ]] 00:07:54.438 02:54:40 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == crypto_* ]] 00:07:54.438 02:54:40 blockdev_nvme -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:07:54.438 02:54:40 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=78406 00:07:54.438 02:54:40 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:54.438 02:54:40 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:54.438 02:54:40 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 78406 00:07:54.438 02:54:40 blockdev_nvme -- common/autotest_common.sh@827 -- # '[' -z 78406 ']' 00:07:54.438 02:54:40 blockdev_nvme -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.438 02:54:40 blockdev_nvme -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:54.439 02:54:40 blockdev_nvme -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:54.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.439 02:54:40 blockdev_nvme -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:54.439 02:54:40 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:54.439 [2024-05-14 02:54:40.458790] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:07:54.439 [2024-05-14 02:54:40.459483] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78406 ] 00:07:54.698 [2024-05-14 02:54:40.598183] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:54.698 [2024-05-14 02:54:40.622123] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.698 [2024-05-14 02:54:40.664681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.635 02:54:41 blockdev_nvme -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:55.635 02:54:41 blockdev_nvme -- common/autotest_common.sh@860 -- # return 0 00:07:55.635 02:54:41 blockdev_nvme -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:07:55.635 02:54:41 blockdev_nvme -- bdev/blockdev.sh@699 -- # setup_nvme_conf 00:07:55.635 02:54:41 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:07:55.635 02:54:41 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:55.635 02:54:41 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:55.635 02:54:41 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:55.635 02:54:41 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.635 02:54:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:55.894 02:54:41 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.894 02:54:41 blockdev_nvme -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:07:55.894 02:54:41 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.894 02:54:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:55.894 02:54:41 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.894 02:54:41 blockdev_nvme -- bdev/blockdev.sh@740 -- # cat 00:07:55.894 02:54:41 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:07:55.894 02:54:41 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.894 02:54:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:55.894 02:54:41 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.894 02:54:41 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:07:55.894 02:54:41 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.894 02:54:41 
blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:55.894 02:54:41 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.894 02:54:41 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:55.895 02:54:41 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.895 02:54:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:55.895 02:54:41 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.895 02:54:41 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:07:55.895 02:54:41 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:07:55.895 02:54:41 blockdev_nvme -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:07:55.895 02:54:41 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.895 02:54:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:55.895 02:54:41 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.895 02:54:41 blockdev_nvme -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:07:55.895 02:54:41 blockdev_nvme -- bdev/blockdev.sh@749 -- # jq -r .name 00:07:55.895 02:54:41 blockdev_nvme -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "b67f4f49-0ae6-4361-9a8f-11efbf832e76"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "b67f4f49-0ae6-4361-9a8f-11efbf832e76",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "e752b0f7-6684-4721-bf06-07f952d5235f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "e752b0f7-6684-4721-bf06-07f952d5235f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' 
"serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "5da51476-c568-4f93-988a-7bd1d7b88d3f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5da51476-c568-4f93-988a-7bd1d7b88d3f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "113b2818-ebdc-493b-8c13-31b6a6d71d7d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "113b2818-ebdc-493b-8c13-31b6a6d71d7d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "75d85acb-1bb5-4a0f-bd31-b60fb5f5683d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "75d85acb-1bb5-4a0f-bd31-b60fb5f5683d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": 
true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "292ac1fa-ea57-475d-ac8c-5cc5cb160622"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "292ac1fa-ea57-475d-ac8c-5cc5cb160622",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:55.895 02:54:41 blockdev_nvme -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:07:55.895 02:54:41 blockdev_nvme -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1 00:07:55.895 02:54:41 blockdev_nvme -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:07:55.895 02:54:41 blockdev_nvme -- bdev/blockdev.sh@754 -- # killprocess 78406 00:07:55.895 02:54:41 blockdev_nvme -- common/autotest_common.sh@946 -- # '[' -z 78406 ']' 00:07:55.895 02:54:41 blockdev_nvme -- common/autotest_common.sh@950 -- # kill -0 78406 00:07:55.895 02:54:41 blockdev_nvme -- common/autotest_common.sh@951 -- # uname 00:07:55.895 02:54:41 blockdev_nvme -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:55.895 02:54:41 blockdev_nvme -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78406 00:07:56.154 02:54:41 blockdev_nvme -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:56.154 02:54:41 blockdev_nvme -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:56.154 killing process with pid 78406 00:07:56.154 02:54:41 blockdev_nvme -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78406' 00:07:56.154 02:54:41 blockdev_nvme -- common/autotest_common.sh@965 -- # kill 78406 00:07:56.154 02:54:41 blockdev_nvme -- common/autotest_common.sh@970 -- # wait 78406 00:07:56.413 02:54:42 blockdev_nvme -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:56.413 02:54:42 blockdev_nvme -- bdev/blockdev.sh@760 -- 
# run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:56.413 02:54:42 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:56.413 02:54:42 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:56.413 02:54:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:56.413 ************************************ 00:07:56.413 START TEST bdev_hello_world 00:07:56.413 ************************************ 00:07:56.413 02:54:42 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:56.413 [2024-05-14 02:54:42.310653] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:07:56.413 [2024-05-14 02:54:42.310831] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78479 ] 00:07:56.673 [2024-05-14 02:54:42.458481] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:56.673 [2024-05-14 02:54:42.478721] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.673 [2024-05-14 02:54:42.514164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.932 [2024-05-14 02:54:42.868975] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:56.932 [2024-05-14 02:54:42.869029] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:56.932 [2024-05-14 02:54:42.869092] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:56.932 [2024-05-14 02:54:42.871291] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:56.932 [2024-05-14 02:54:42.871703] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:56.932 [2024-05-14 02:54:42.871736] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:56.932 [2024-05-14 02:54:42.872057] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
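The JSON dump above is produced by the test's bdev discovery step (blockdev.sh lines 748-750): bdev_get_bdevs is issued over the framework RPC, only unclaimed bdevs are kept, and their names become bdev_list, from which Nvme0n1 is then picked as hello_world_bdev. A condensed sketch of that step, assuming a running SPDK target and the stock scripts/rpc.py from the same tree:

  # list all bdevs, keep the unclaimed ones, print just their names
  ./scripts/rpc.py bdev_get_bdevs \
    | jq -r '.[] | select(.claimed == false) | .name'
  # the harness captures this output with: mapfile -t bdevs_name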
00:07:56.932 00:07:56.932 [2024-05-14 02:54:42.872122] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:57.191 00:07:57.191 real 0m0.823s 00:07:57.191 user 0m0.534s 00:07:57.191 sys 0m0.184s 00:07:57.191 02:54:43 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:57.191 ************************************ 00:07:57.191 END TEST bdev_hello_world 00:07:57.191 ************************************ 00:07:57.191 02:54:43 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:57.191 02:54:43 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:07:57.191 02:54:43 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:57.191 02:54:43 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:57.191 02:54:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:57.191 ************************************ 00:07:57.191 START TEST bdev_bounds 00:07:57.191 ************************************ 00:07:57.191 02:54:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1121 -- # bdev_bounds '' 00:07:57.191 Process bdevio pid: 78510 00:07:57.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.191 02:54:43 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=78510 00:07:57.191 02:54:43 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:57.191 02:54:43 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 78510' 00:07:57.191 02:54:43 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:57.191 02:54:43 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 78510 00:07:57.191 02:54:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@827 -- # '[' -z 78510 ']' 00:07:57.191 02:54:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.191 02:54:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:57.191 02:54:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.191 02:54:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:57.191 02:54:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:57.191 [2024-05-14 02:54:43.197495] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:07:57.191 [2024-05-14 02:54:43.197691] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78510 ] 00:07:57.451 [2024-05-14 02:54:43.349674] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
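bdev_hello_world is a thin wrapper around the hello_bdev example: it opens the chosen bdev, writes a buffer through an io channel, reads it back and expects the "Hello World!" string seen above. A minimal manual re-run, assuming the same checkout path and the bdev.json generated earlier in this job:

  /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -b Nvme0n1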
00:07:57.451 [2024-05-14 02:54:43.368785] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:57.451 [2024-05-14 02:54:43.406809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:57.451 [2024-05-14 02:54:43.406877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.451 [2024-05-14 02:54:43.406926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:58.386 02:54:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:58.386 02:54:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@860 -- # return 0 00:07:58.386 02:54:44 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:58.386 I/O targets: 00:07:58.386 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:58.386 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:07:58.386 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:58.386 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:58.386 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:58.386 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:58.386 00:07:58.386 00:07:58.386 CUnit - A unit testing framework for C - Version 2.1-3 00:07:58.386 http://cunit.sourceforge.net/ 00:07:58.386 00:07:58.386 00:07:58.386 Suite: bdevio tests on: Nvme3n1 00:07:58.386 Test: blockdev write read block ...passed 00:07:58.386 Test: blockdev write zeroes read block ...passed 00:07:58.386 Test: blockdev write zeroes read no split ...passed 00:07:58.386 Test: blockdev write zeroes read split ...passed 00:07:58.386 Test: blockdev write zeroes read split partial ...passed 00:07:58.386 Test: blockdev reset ...[2024-05-14 02:54:44.226324] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:07:58.386 passed 00:07:58.386 Test: blockdev write read 8 blocks ...[2024-05-14 02:54:44.228720] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
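The CUnit output around this point comes from the bdev_bounds test, which drives the bdevio application: bdevio is launched against the same bdev.json and left waiting, and tests.py then triggers the perform_tests RPC so the boundary suites run against every bdev listed under "I/O targets". A rough by-hand equivalent, assuming the same tree and the default RPC socket (in the harness, waitforlisten synchronizes the two steps):

  # terminal 1: start bdevio against the generated config and let it wait for the trigger
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
  # terminal 2: kick off the suites shown in this log
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests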
00:07:58.386 passed 00:07:58.386 Test: blockdev write read size > 128k ...passed 00:07:58.386 Test: blockdev write read invalid size ...passed 00:07:58.386 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:58.386 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:58.386 Test: blockdev write read max offset ...passed 00:07:58.386 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:58.386 Test: blockdev writev readv 8 blocks ...passed 00:07:58.386 Test: blockdev writev readv 30 x 1block ...passed 00:07:58.386 Test: blockdev writev readv block ...passed 00:07:58.386 Test: blockdev writev readv size > 128k ...passed 00:07:58.386 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:58.386 Test: blockdev comparev and writev ...[2024-05-14 02:54:44.235988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d3e0e000 len:0x1000 00:07:58.386 [2024-05-14 02:54:44.236064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:58.386 passed 00:07:58.386 Test: blockdev nvme passthru rw ...passed 00:07:58.386 Test: blockdev nvme passthru vendor specific ...[2024-05-14 02:54:44.237038] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:58.386 [2024-05-14 02:54:44.237096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:58.386 passed 00:07:58.386 Test: blockdev nvme admin passthru ...passed 00:07:58.386 Test: blockdev copy ...passed 00:07:58.386 Suite: bdevio tests on: Nvme2n3 00:07:58.386 Test: blockdev write read block ...passed 00:07:58.386 Test: blockdev write zeroes read block ...passed 00:07:58.386 Test: blockdev write zeroes read no split ...passed 00:07:58.386 Test: blockdev write zeroes read split ...passed 00:07:58.386 Test: blockdev write zeroes read split partial ...passed 00:07:58.386 Test: blockdev reset ...[2024-05-14 02:54:44.260391] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:58.386 passed 00:07:58.386 Test: blockdev write read 8 blocks ...[2024-05-14 02:54:44.262822] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:58.386 passed 00:07:58.386 Test: blockdev write read size > 128k ...passed 00:07:58.386 Test: blockdev write read invalid size ...passed 00:07:58.386 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:58.386 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:58.386 Test: blockdev write read max offset ...passed 00:07:58.386 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:58.386 Test: blockdev writev readv 8 blocks ...passed 00:07:58.386 Test: blockdev writev readv 30 x 1block ...passed 00:07:58.386 Test: blockdev writev readv block ...passed 00:07:58.386 Test: blockdev writev readv size > 128k ...passed 00:07:58.386 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:58.386 Test: blockdev comparev and writev ...[2024-05-14 02:54:44.269299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d3e08000 len:0x1000 00:07:58.386 [2024-05-14 02:54:44.269351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:58.386 passed 00:07:58.386 Test: blockdev nvme passthru rw ...passed 00:07:58.386 Test: blockdev nvme passthru vendor specific ...[2024-05-14 02:54:44.270206] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:58.386 [2024-05-14 02:54:44.270247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:58.386 passed 00:07:58.386 Test: blockdev nvme admin passthru ...passed 00:07:58.386 Test: blockdev copy ...passed 00:07:58.386 Suite: bdevio tests on: Nvme2n2 00:07:58.386 Test: blockdev write read block ...passed 00:07:58.386 Test: blockdev write zeroes read block ...passed 00:07:58.386 Test: blockdev write zeroes read no split ...passed 00:07:58.386 Test: blockdev write zeroes read split ...passed 00:07:58.386 Test: blockdev write zeroes read split partial ...passed 00:07:58.386 Test: blockdev reset ...[2024-05-14 02:54:44.281815] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:58.386 passed 00:07:58.386 Test: blockdev write read 8 blocks ...[2024-05-14 02:54:44.283978] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:58.386 passed 00:07:58.386 Test: blockdev write read size > 128k ...passed 00:07:58.386 Test: blockdev write read invalid size ...passed 00:07:58.386 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:58.386 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:58.386 Test: blockdev write read max offset ...passed 00:07:58.386 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:58.386 Test: blockdev writev readv 8 blocks ...passed 00:07:58.386 Test: blockdev writev readv 30 x 1block ...passed 00:07:58.386 Test: blockdev writev readv block ...passed 00:07:58.386 Test: blockdev writev readv size > 128k ...passed 00:07:58.386 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:58.386 Test: blockdev comparev and writev ...[2024-05-14 02:54:44.290277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d3e04000 len:0x1000 00:07:58.386 [2024-05-14 02:54:44.290327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:58.386 passed 00:07:58.386 Test: blockdev nvme passthru rw ...passed 00:07:58.386 Test: blockdev nvme passthru vendor specific ...[2024-05-14 02:54:44.291075] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:58.386 [2024-05-14 02:54:44.291120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:58.386 passed 00:07:58.386 Test: blockdev nvme admin passthru ...passed 00:07:58.386 Test: blockdev copy ...passed 00:07:58.386 Suite: bdevio tests on: Nvme2n1 00:07:58.386 Test: blockdev write read block ...passed 00:07:58.387 Test: blockdev write zeroes read block ...passed 00:07:58.387 Test: blockdev write zeroes read no split ...passed 00:07:58.387 Test: blockdev write zeroes read split ...passed 00:07:58.387 Test: blockdev write zeroes read split partial ...passed 00:07:58.387 Test: blockdev reset ...[2024-05-14 02:54:44.303479] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:58.387 passed 00:07:58.387 Test: blockdev write read 8 blocks ...[2024-05-14 02:54:44.305466] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:58.387 passed 00:07:58.387 Test: blockdev write read size > 128k ...passed 00:07:58.387 Test: blockdev write read invalid size ...passed 00:07:58.387 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:58.387 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:58.387 Test: blockdev write read max offset ...passed 00:07:58.387 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:58.387 Test: blockdev writev readv 8 blocks ...passed 00:07:58.387 Test: blockdev writev readv 30 x 1block ...passed 00:07:58.387 Test: blockdev writev readv block ...passed 00:07:58.387 Test: blockdev writev readv size > 128k ...passed 00:07:58.387 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:58.387 Test: blockdev comparev and writev ...[2024-05-14 02:54:44.311642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d3e04000 len:0x1000 00:07:58.387 [2024-05-14 02:54:44.311696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:58.387 passed 00:07:58.387 Test: blockdev nvme passthru rw ...passed 00:07:58.387 Test: blockdev nvme passthru vendor specific ...passed 00:07:58.387 Test: blockdev nvme admin passthru ...[2024-05-14 02:54:44.312594] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:58.387 [2024-05-14 02:54:44.312646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:58.387 passed 00:07:58.387 Test: blockdev copy ...passed 00:07:58.387 Suite: bdevio tests on: Nvme1n1 00:07:58.387 Test: blockdev write read block ...passed 00:07:58.387 Test: blockdev write zeroes read block ...passed 00:07:58.387 Test: blockdev write zeroes read no split ...passed 00:07:58.387 Test: blockdev write zeroes read split ...passed 00:07:58.387 Test: blockdev write zeroes read split partial ...passed 00:07:58.387 Test: blockdev reset ...[2024-05-14 02:54:44.324944] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:07:58.387 passed 00:07:58.387 Test: blockdev write read 8 blocks ...[2024-05-14 02:54:44.326896] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:58.387 passed 00:07:58.387 Test: blockdev write read size > 128k ...passed 00:07:58.387 Test: blockdev write read invalid size ...passed 00:07:58.387 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:58.387 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:58.387 Test: blockdev write read max offset ...passed 00:07:58.387 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:58.387 Test: blockdev writev readv 8 blocks ...passed 00:07:58.387 Test: blockdev writev readv 30 x 1block ...passed 00:07:58.387 Test: blockdev writev readv block ...passed 00:07:58.387 Test: blockdev writev readv size > 128k ...passed 00:07:58.387 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:58.387 Test: blockdev comparev and writev ...[2024-05-14 02:54:44.333398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2de80e000 len:0x1000 00:07:58.387 [2024-05-14 02:54:44.333465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:58.387 passed 00:07:58.387 Test: blockdev nvme passthru rw ...passed 00:07:58.387 Test: blockdev nvme passthru vendor specific ...passed 00:07:58.387 Test: blockdev nvme admin passthru ...[2024-05-14 02:54:44.334351] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:58.387 [2024-05-14 02:54:44.334410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:58.387 passed 00:07:58.387 Test: blockdev copy ...passed 00:07:58.387 Suite: bdevio tests on: Nvme0n1 00:07:58.387 Test: blockdev write read block ...passed 00:07:58.387 Test: blockdev write zeroes read block ...passed 00:07:58.387 Test: blockdev write zeroes read no split ...passed 00:07:58.387 Test: blockdev write zeroes read split ...passed 00:07:58.387 Test: blockdev write zeroes read split partial ...passed 00:07:58.387 Test: blockdev reset ...[2024-05-14 02:54:44.346840] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:07:58.387 passed 00:07:58.387 Test: blockdev write read 8 blocks ...[2024-05-14 02:54:44.348841] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:58.387 passed 00:07:58.387 Test: blockdev write read size > 128k ...passed 00:07:58.387 Test: blockdev write read invalid size ...passed 00:07:58.387 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:58.387 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:58.387 Test: blockdev write read max offset ...passed 00:07:58.387 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:58.387 Test: blockdev writev readv 8 blocks ...passed 00:07:58.387 Test: blockdev writev readv 30 x 1block ...passed 00:07:58.387 Test: blockdev writev readv block ...passed 00:07:58.387 Test: blockdev writev readv size > 128k ...passed 00:07:58.387 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:58.387 Test: blockdev comparev and writev ...passed 00:07:58.387 Test: blockdev nvme passthru rw ...[2024-05-14 02:54:44.354233] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:58.387 separate metadata which is not supported yet. 
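The skip message above is expected rather than a failure: Nvme0n1 was created with 64 bytes of separate (non-interleaved) metadata per block (see md_size/md_interleave in the bdev_get_bdevs dump earlier), and bdevio's comparev_and_writev case does not support that layout yet. A quick way to inspect the metadata layout of a single bdev against a running target (a sketch, not part of the harness):

  ./scripts/rpc.py bdev_get_bdevs -b Nvme0n1 \
    | jq '.[0] | {block_size, md_size, md_interleave}'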
00:07:58.387 passed 00:07:58.387 Test: blockdev nvme passthru vendor specific ...passed 00:07:58.387 Test: blockdev nvme admin passthru ...[2024-05-14 02:54:44.354741] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:58.387 [2024-05-14 02:54:44.354793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:58.387 passed 00:07:58.387 Test: blockdev copy ...passed 00:07:58.387 00:07:58.387 Run Summary: Type Total Ran Passed Failed Inactive 00:07:58.387 suites 6 6 n/a 0 0 00:07:58.387 tests 138 138 138 0 0 00:07:58.387 asserts 893 893 893 0 n/a 00:07:58.387 00:07:58.387 Elapsed time = 0.329 seconds 00:07:58.387 0 00:07:58.387 02:54:44 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 78510 00:07:58.387 02:54:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@946 -- # '[' -z 78510 ']' 00:07:58.387 02:54:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # kill -0 78510 00:07:58.387 02:54:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@951 -- # uname 00:07:58.387 02:54:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:58.387 02:54:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78510 00:07:58.387 02:54:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:58.387 02:54:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:58.387 02:54:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78510' 00:07:58.387 killing process with pid 78510 00:07:58.387 02:54:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@965 -- # kill 78510 00:07:58.387 02:54:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@970 -- # wait 78510 00:07:58.646 02:54:44 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:07:58.646 00:07:58.646 real 0m1.466s 00:07:58.646 user 0m3.663s 00:07:58.646 sys 0m0.335s 00:07:58.646 02:54:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:58.646 02:54:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:58.646 ************************************ 00:07:58.646 END TEST bdev_bounds 00:07:58.646 ************************************ 00:07:58.646 02:54:44 blockdev_nvme -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:58.646 02:54:44 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:07:58.646 02:54:44 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:58.646 02:54:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:58.646 ************************************ 00:07:58.646 START TEST bdev_nbd 00:07:58.646 ************************************ 00:07:58.646 02:54:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1121 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:58.646 02:54:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:07:58.646 02:54:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:07:58.646 02:54:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:07:58.646 02:54:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:58.646 02:54:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:58.646 02:54:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:07:58.646 02:54:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=6 00:07:58.646 02:54:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:07:58.646 02:54:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:58.646 02:54:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:07:58.646 02:54:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=6 00:07:58.646 02:54:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:58.646 02:54:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:07:58.646 02:54:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:58.646 02:54:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:07:58.646 02:54:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=78553 00:07:58.646 02:54:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:58.646 02:54:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 78553 /var/tmp/spdk-nbd.sock 00:07:58.647 02:54:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:58.647 02:54:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@827 -- # '[' -z 78553 ']' 00:07:58.647 02:54:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:58.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:58.647 02:54:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:58.647 02:54:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:58.647 02:54:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:58.647 02:54:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:58.905 [2024-05-14 02:54:44.717203] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:07:58.905 [2024-05-14 02:54:44.717429] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:58.905 [2024-05-14 02:54:44.862463] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
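The bdev_nbd test starting here exports each bdev as a kernel /dev/nbdX device through the bdev_svc app and checks it with a single direct-I/O dd, which is what produces the repeated "1+0 records in / 1+0 records out" lines below. Condensed, the per-device sequence is roughly as follows (socket path, device name and test file match this run; the harness retries the /proc/partitions check up to 20 times):

  sock=/var/tmp/spdk-nbd.sock
  # attach the bdev to a kernel nbd device
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" nbd_start_disk Nvme0n1 /dev/nbd0
  # wait until the kernel has registered it, then read one 4 KiB block with O_DIRECT
  grep -q -w nbd0 /proc/partitions
  dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
  # detach again
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" nbd_stop_disk /dev/nbd0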
00:07:58.905 [2024-05-14 02:54:44.887341] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.163 [2024-05-14 02:54:44.934534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.729 02:54:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:59.729 02:54:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@860 -- # return 0 00:07:59.729 02:54:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:59.729 02:54:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:59.729 02:54:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:59.729 02:54:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:59.729 02:54:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:59.729 02:54:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:59.729 02:54:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:59.729 02:54:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:59.729 02:54:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:59.729 02:54:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:59.729 02:54:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:59.729 02:54:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:59.729 02:54:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:08:00.059 02:54:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:00.059 02:54:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:00.059 02:54:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:00.059 02:54:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:08:00.059 02:54:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:00.059 02:54:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:00.059 02:54:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:00.059 02:54:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:08:00.059 02:54:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:00.059 02:54:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:00.059 02:54:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:00.059 02:54:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:00.059 1+0 records in 00:08:00.059 1+0 records out 00:08:00.059 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000439969 s, 9.3 MB/s 00:08:00.059 02:54:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:00.059 02:54:45 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@882 -- # size=4096 00:08:00.059 02:54:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:00.059 02:54:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:00.059 02:54:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:00.059 02:54:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:00.059 02:54:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:00.059 02:54:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:08:00.336 02:54:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:00.336 02:54:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:00.336 02:54:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:00.336 02:54:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:08:00.336 02:54:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:00.336 02:54:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:00.336 02:54:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:00.336 02:54:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:08:00.336 02:54:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:00.336 02:54:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:00.336 02:54:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:00.336 02:54:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:00.336 1+0 records in 00:08:00.336 1+0 records out 00:08:00.336 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000701065 s, 5.8 MB/s 00:08:00.336 02:54:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:00.336 02:54:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:00.336 02:54:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:00.336 02:54:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:00.336 02:54:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:00.336 02:54:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:00.336 02:54:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:00.336 02:54:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:00.594 02:54:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:00.594 02:54:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:00.594 02:54:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:08:00.594 02:54:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd2 00:08:00.594 02:54:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:00.594 02:54:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( 
i = 1 )) 00:08:00.594 02:54:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:00.594 02:54:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd2 /proc/partitions 00:08:00.594 02:54:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:00.594 02:54:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:00.594 02:54:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:00.594 02:54:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:00.594 1+0 records in 00:08:00.594 1+0 records out 00:08:00.594 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00061348 s, 6.7 MB/s 00:08:00.594 02:54:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:00.594 02:54:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:00.594 02:54:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:00.594 02:54:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:00.594 02:54:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:00.594 02:54:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:00.594 02:54:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:00.594 02:54:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:00.852 02:54:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:00.852 02:54:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:00.852 02:54:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:00.852 02:54:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd3 00:08:00.852 02:54:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:00.852 02:54:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:00.852 02:54:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:00.852 02:54:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd3 /proc/partitions 00:08:00.852 02:54:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:00.852 02:54:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:00.852 02:54:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:00.852 02:54:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:00.852 1+0 records in 00:08:00.852 1+0 records out 00:08:00.852 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000630962 s, 6.5 MB/s 00:08:00.852 02:54:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:00.852 02:54:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:00.852 02:54:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:00.852 02:54:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 
'!=' 0 ']' 00:08:00.852 02:54:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:00.852 02:54:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:00.852 02:54:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:00.852 02:54:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:08:01.111 02:54:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:01.111 02:54:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:01.111 02:54:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:01.111 02:54:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd4 00:08:01.111 02:54:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:01.111 02:54:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:01.111 02:54:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:01.111 02:54:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd4 /proc/partitions 00:08:01.111 02:54:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:01.111 02:54:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:01.111 02:54:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:01.111 02:54:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:01.111 1+0 records in 00:08:01.111 1+0 records out 00:08:01.111 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000754724 s, 5.4 MB/s 00:08:01.111 02:54:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:01.111 02:54:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:01.111 02:54:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:01.111 02:54:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:01.111 02:54:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:01.111 02:54:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:01.111 02:54:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:01.111 02:54:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:01.370 02:54:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:01.370 02:54:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:01.370 02:54:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:01.370 02:54:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd5 00:08:01.370 02:54:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:01.370 02:54:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:01.370 02:54:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:01.370 02:54:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd5 /proc/partitions 00:08:01.370 02:54:47 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@869 -- # break 00:08:01.370 02:54:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:01.370 02:54:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:01.370 02:54:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:01.370 1+0 records in 00:08:01.370 1+0 records out 00:08:01.370 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000744671 s, 5.5 MB/s 00:08:01.370 02:54:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:01.370 02:54:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:01.370 02:54:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:01.370 02:54:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:01.370 02:54:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:01.370 02:54:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:01.370 02:54:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:01.370 02:54:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:01.628 02:54:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:01.628 { 00:08:01.628 "nbd_device": "/dev/nbd0", 00:08:01.628 "bdev_name": "Nvme0n1" 00:08:01.628 }, 00:08:01.628 { 00:08:01.628 "nbd_device": "/dev/nbd1", 00:08:01.628 "bdev_name": "Nvme1n1" 00:08:01.628 }, 00:08:01.628 { 00:08:01.628 "nbd_device": "/dev/nbd2", 00:08:01.628 "bdev_name": "Nvme2n1" 00:08:01.628 }, 00:08:01.628 { 00:08:01.628 "nbd_device": "/dev/nbd3", 00:08:01.628 "bdev_name": "Nvme2n2" 00:08:01.628 }, 00:08:01.628 { 00:08:01.628 "nbd_device": "/dev/nbd4", 00:08:01.628 "bdev_name": "Nvme2n3" 00:08:01.628 }, 00:08:01.628 { 00:08:01.628 "nbd_device": "/dev/nbd5", 00:08:01.628 "bdev_name": "Nvme3n1" 00:08:01.628 } 00:08:01.628 ]' 00:08:01.628 02:54:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:01.628 02:54:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:01.628 { 00:08:01.628 "nbd_device": "/dev/nbd0", 00:08:01.628 "bdev_name": "Nvme0n1" 00:08:01.628 }, 00:08:01.628 { 00:08:01.628 "nbd_device": "/dev/nbd1", 00:08:01.628 "bdev_name": "Nvme1n1" 00:08:01.628 }, 00:08:01.628 { 00:08:01.628 "nbd_device": "/dev/nbd2", 00:08:01.629 "bdev_name": "Nvme2n1" 00:08:01.629 }, 00:08:01.629 { 00:08:01.629 "nbd_device": "/dev/nbd3", 00:08:01.629 "bdev_name": "Nvme2n2" 00:08:01.629 }, 00:08:01.629 { 00:08:01.629 "nbd_device": "/dev/nbd4", 00:08:01.629 "bdev_name": "Nvme2n3" 00:08:01.629 }, 00:08:01.629 { 00:08:01.629 "nbd_device": "/dev/nbd5", 00:08:01.629 "bdev_name": "Nvme3n1" 00:08:01.629 } 00:08:01.629 ]' 00:08:01.629 02:54:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:01.629 02:54:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:08:01.629 02:54:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:01.629 02:54:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:08:01.629 02:54:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:01.629 02:54:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:01.629 02:54:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:01.629 02:54:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:01.887 02:54:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:01.887 02:54:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:01.887 02:54:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:01.887 02:54:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:01.887 02:54:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:01.887 02:54:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:01.887 02:54:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:01.887 02:54:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:01.887 02:54:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:01.887 02:54:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:02.145 02:54:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:02.145 02:54:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:02.145 02:54:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:02.146 02:54:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:02.146 02:54:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:02.146 02:54:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:02.146 02:54:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:02.146 02:54:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:02.146 02:54:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:02.146 02:54:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:02.404 02:54:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:02.404 02:54:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:02.404 02:54:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:02.404 02:54:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:02.404 02:54:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:02.404 02:54:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:02.404 02:54:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:02.404 02:54:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:02.404 02:54:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:02.404 02:54:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:02.663 
02:54:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:02.663 02:54:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:02.663 02:54:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:02.663 02:54:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:02.663 02:54:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:02.663 02:54:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:02.663 02:54:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:02.663 02:54:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:02.663 02:54:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:02.663 02:54:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:02.921 02:54:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:02.921 02:54:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:02.921 02:54:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:02.921 02:54:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:02.921 02:54:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:02.921 02:54:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:02.921 02:54:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:02.921 02:54:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:02.921 02:54:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:02.921 02:54:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:03.180 02:54:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:03.180 02:54:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:03.180 02:54:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:03.180 02:54:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:03.180 02:54:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:03.180 02:54:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:03.180 02:54:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:03.180 02:54:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:03.180 02:54:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:03.180 02:54:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:03.180 02:54:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:03.439 02:54:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:03.439 02:54:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:03.439 02:54:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:03.439 02:54:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:03.439 02:54:49 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@65 -- # echo '' 00:08:03.439 02:54:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:03.439 02:54:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:03.439 02:54:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:03.439 02:54:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:03.439 02:54:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:08:03.439 02:54:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:03.439 02:54:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:08:03.439 02:54:49 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:03.439 02:54:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:03.439 02:54:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:03.439 02:54:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:03.439 02:54:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:03.439 02:54:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:03.439 02:54:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:03.439 02:54:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:03.439 02:54:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:03.439 02:54:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:03.439 02:54:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:03.439 02:54:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:03.439 02:54:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:08:03.439 02:54:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:03.439 02:54:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:03.439 02:54:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:08:03.698 /dev/nbd0 00:08:03.698 02:54:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:03.698 02:54:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:03.698 02:54:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:08:03.698 02:54:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:03.698 02:54:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:03.698 02:54:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:03.698 02:54:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:08:03.698 02:54:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:03.698 
02:54:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:03.698 02:54:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:03.698 02:54:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:03.698 1+0 records in 00:08:03.698 1+0 records out 00:08:03.698 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00036836 s, 11.1 MB/s 00:08:03.698 02:54:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:03.698 02:54:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:03.698 02:54:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:03.698 02:54:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:03.698 02:54:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:03.698 02:54:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:03.698 02:54:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:03.698 02:54:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:08:03.957 /dev/nbd1 00:08:03.957 02:54:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:03.957 02:54:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:03.957 02:54:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:08:03.957 02:54:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:03.957 02:54:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:03.957 02:54:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:03.957 02:54:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:08:03.957 02:54:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:03.957 02:54:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:03.957 02:54:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:03.957 02:54:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:03.957 1+0 records in 00:08:03.957 1+0 records out 00:08:03.957 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000689457 s, 5.9 MB/s 00:08:03.957 02:54:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:03.957 02:54:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:03.957 02:54:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:03.957 02:54:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:03.957 02:54:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:03.957 02:54:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:03.957 02:54:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:03.957 02:54:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:08:04.215 /dev/nbd10 00:08:04.215 02:54:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:04.215 02:54:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:04.215 02:54:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd10 00:08:04.215 02:54:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:04.215 02:54:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:04.215 02:54:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:04.215 02:54:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd10 /proc/partitions 00:08:04.215 02:54:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:04.215 02:54:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:04.215 02:54:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:04.215 02:54:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:04.215 1+0 records in 00:08:04.215 1+0 records out 00:08:04.215 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000595951 s, 6.9 MB/s 00:08:04.215 02:54:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.215 02:54:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:04.215 02:54:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.215 02:54:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:04.215 02:54:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:04.215 02:54:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:04.215 02:54:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:04.215 02:54:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:08:04.781 /dev/nbd11 00:08:04.781 02:54:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:04.781 02:54:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:04.782 02:54:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd11 00:08:04.782 02:54:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:04.782 02:54:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:04.782 02:54:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:04.782 02:54:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd11 /proc/partitions 00:08:04.782 02:54:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:04.782 02:54:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:04.782 02:54:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:04.782 02:54:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:04.782 1+0 records in 00:08:04.782 1+0 records 
out 00:08:04.782 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000831059 s, 4.9 MB/s 00:08:04.782 02:54:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.782 02:54:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:04.782 02:54:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.782 02:54:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:04.782 02:54:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:04.782 02:54:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:04.782 02:54:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:04.782 02:54:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:08:05.040 /dev/nbd12 00:08:05.040 02:54:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:05.040 02:54:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:05.040 02:54:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd12 00:08:05.040 02:54:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:05.040 02:54:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:05.040 02:54:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:05.040 02:54:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd12 /proc/partitions 00:08:05.040 02:54:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:05.040 02:54:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:05.040 02:54:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:05.040 02:54:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:05.040 1+0 records in 00:08:05.040 1+0 records out 00:08:05.040 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000915335 s, 4.5 MB/s 00:08:05.041 02:54:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:05.041 02:54:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:05.041 02:54:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:05.041 02:54:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:05.041 02:54:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:05.041 02:54:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:05.041 02:54:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:05.041 02:54:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:08:05.041 /dev/nbd13 00:08:05.299 02:54:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:05.299 02:54:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:05.299 02:54:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd13 00:08:05.299 02:54:51 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:05.299 02:54:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:05.299 02:54:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:05.299 02:54:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd13 /proc/partitions 00:08:05.299 02:54:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:05.299 02:54:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:05.299 02:54:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:05.299 02:54:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:05.299 1+0 records in 00:08:05.299 1+0 records out 00:08:05.299 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000872992 s, 4.7 MB/s 00:08:05.299 02:54:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:05.299 02:54:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:05.299 02:54:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:05.299 02:54:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:05.299 02:54:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:05.299 02:54:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:05.299 02:54:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:05.299 02:54:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:05.299 02:54:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:05.299 02:54:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:05.299 02:54:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:05.299 { 00:08:05.300 "nbd_device": "/dev/nbd0", 00:08:05.300 "bdev_name": "Nvme0n1" 00:08:05.300 }, 00:08:05.300 { 00:08:05.300 "nbd_device": "/dev/nbd1", 00:08:05.300 "bdev_name": "Nvme1n1" 00:08:05.300 }, 00:08:05.300 { 00:08:05.300 "nbd_device": "/dev/nbd10", 00:08:05.300 "bdev_name": "Nvme2n1" 00:08:05.300 }, 00:08:05.300 { 00:08:05.300 "nbd_device": "/dev/nbd11", 00:08:05.300 "bdev_name": "Nvme2n2" 00:08:05.300 }, 00:08:05.300 { 00:08:05.300 "nbd_device": "/dev/nbd12", 00:08:05.300 "bdev_name": "Nvme2n3" 00:08:05.300 }, 00:08:05.300 { 00:08:05.300 "nbd_device": "/dev/nbd13", 00:08:05.300 "bdev_name": "Nvme3n1" 00:08:05.300 } 00:08:05.300 ]' 00:08:05.300 02:54:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:05.300 { 00:08:05.300 "nbd_device": "/dev/nbd0", 00:08:05.300 "bdev_name": "Nvme0n1" 00:08:05.300 }, 00:08:05.300 { 00:08:05.300 "nbd_device": "/dev/nbd1", 00:08:05.300 "bdev_name": "Nvme1n1" 00:08:05.300 }, 00:08:05.300 { 00:08:05.300 "nbd_device": "/dev/nbd10", 00:08:05.300 "bdev_name": "Nvme2n1" 00:08:05.300 }, 00:08:05.300 { 00:08:05.300 "nbd_device": "/dev/nbd11", 00:08:05.300 "bdev_name": "Nvme2n2" 00:08:05.300 }, 00:08:05.300 { 00:08:05.300 "nbd_device": "/dev/nbd12", 00:08:05.300 "bdev_name": "Nvme2n3" 00:08:05.300 }, 00:08:05.300 { 00:08:05.300 "nbd_device": "/dev/nbd13", 00:08:05.300 "bdev_name": 
"Nvme3n1" 00:08:05.300 } 00:08:05.300 ]' 00:08:05.300 02:54:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:05.559 02:54:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:05.559 /dev/nbd1 00:08:05.559 /dev/nbd10 00:08:05.559 /dev/nbd11 00:08:05.559 /dev/nbd12 00:08:05.559 /dev/nbd13' 00:08:05.559 02:54:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:05.559 /dev/nbd1 00:08:05.559 /dev/nbd10 00:08:05.559 /dev/nbd11 00:08:05.559 /dev/nbd12 00:08:05.559 /dev/nbd13' 00:08:05.559 02:54:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:05.559 02:54:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:08:05.559 02:54:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:08:05.559 02:54:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:08:05.559 02:54:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:08:05.559 02:54:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:08:05.559 02:54:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:05.559 02:54:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:05.559 02:54:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:05.559 02:54:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:05.559 02:54:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:05.559 02:54:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:05.559 256+0 records in 00:08:05.559 256+0 records out 00:08:05.559 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010429 s, 101 MB/s 00:08:05.559 02:54:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:05.559 02:54:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:05.559 256+0 records in 00:08:05.559 256+0 records out 00:08:05.559 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.167366 s, 6.3 MB/s 00:08:05.559 02:54:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:05.559 02:54:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:05.818 256+0 records in 00:08:05.818 256+0 records out 00:08:05.818 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.148101 s, 7.1 MB/s 00:08:05.818 02:54:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:05.818 02:54:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:06.077 256+0 records in 00:08:06.077 256+0 records out 00:08:06.077 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.180672 s, 5.8 MB/s 00:08:06.077 02:54:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:06.077 02:54:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 
count=256 oflag=direct 00:08:06.077 256+0 records in 00:08:06.077 256+0 records out 00:08:06.077 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.15238 s, 6.9 MB/s 00:08:06.077 02:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:06.077 02:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:06.336 256+0 records in 00:08:06.336 256+0 records out 00:08:06.336 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.171923 s, 6.1 MB/s 00:08:06.336 02:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:06.336 02:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:06.637 256+0 records in 00:08:06.637 256+0 records out 00:08:06.637 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.176229 s, 6.0 MB/s 00:08:06.637 02:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:08:06.637 02:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:06.637 02:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:06.637 02:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:06.637 02:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:06.637 02:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:06.637 02:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:06.637 02:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:06.637 02:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:06.637 02:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:06.637 02:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:06.637 02:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:06.637 02:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:06.637 02:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:06.638 02:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:06.638 02:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:06.638 02:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:06.638 02:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:06.638 02:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:06.638 02:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:06.638 02:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # 
nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:06.638 02:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:06.638 02:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:06.638 02:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:06.638 02:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:06.638 02:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:06.638 02:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:06.897 02:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:06.897 02:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:06.897 02:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:06.897 02:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:06.897 02:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:06.897 02:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:06.897 02:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:06.897 02:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:06.897 02:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:06.897 02:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:07.155 02:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:07.156 02:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:07.156 02:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:07.156 02:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:07.156 02:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:07.156 02:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:07.156 02:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:07.156 02:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:07.156 02:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:07.156 02:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:07.414 02:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:07.414 02:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:07.414 02:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:07.414 02:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:07.414 02:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:07.414 02:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:07.414 02:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:07.414 02:54:53 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:08:07.414 02:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:07.414 02:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:07.672 02:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:07.672 02:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:07.672 02:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:07.672 02:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:07.672 02:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:07.672 02:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:07.672 02:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:07.672 02:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:07.672 02:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:07.672 02:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:07.930 02:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:08:07.930 02:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:07.930 02:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:07.930 02:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:07.930 02:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:07.930 02:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:07.930 02:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:07.930 02:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:07.930 02:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:07.930 02:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:08.188 02:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:08.188 02:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:08.188 02:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:08.188 02:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:08.188 02:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:08.188 02:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:08.188 02:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:08.188 02:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:08.188 02:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:08.188 02:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:08.188 02:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:08.446 02:54:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 
00:08:08.446 02:54:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:08.446 02:54:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:08.446 02:54:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:08.446 02:54:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:08.446 02:54:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:08.446 02:54:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:08.446 02:54:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:08.446 02:54:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:08.446 02:54:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:08:08.446 02:54:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:08.446 02:54:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:08:08.446 02:54:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:08.446 02:54:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:08.446 02:54:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:08.446 02:54:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:08:08.446 02:54:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:08:08.446 02:54:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:08.704 malloc_lvol_verify 00:08:08.704 02:54:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:08.963 fc8e7611-641c-438c-a01d-7c53fc053ada 00:08:08.963 02:54:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:09.221 a84e7ce2-f877-4cb6-b0eb-2f3ff26da414 00:08:09.222 02:54:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:09.480 /dev/nbd0 00:08:09.480 02:54:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:08:09.480 mke2fs 1.46.5 (30-Dec-2021) 00:08:09.480 Discarding device blocks: 0/4096 done 00:08:09.480 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:09.480 00:08:09.480 Allocating group tables: 0/1 done 00:08:09.480 Writing inode tables: 0/1 done 00:08:09.480 Creating journal (1024 blocks): done 00:08:09.480 Writing superblocks and filesystem accounting information: 0/1 done 00:08:09.480 00:08:09.480 02:54:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:08:09.480 02:54:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:09.480 02:54:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:09.480 02:54:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:09.480 02:54:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:09.480 02:54:55 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:09.480 02:54:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:09.480 02:54:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:09.738 02:54:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:09.738 02:54:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:09.738 02:54:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:09.738 02:54:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:09.738 02:54:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:09.738 02:54:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:09.738 02:54:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:09.738 02:54:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:09.738 02:54:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:08:09.738 02:54:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:08:09.738 02:54:55 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 78553 00:08:09.738 02:54:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@946 -- # '[' -z 78553 ']' 00:08:09.738 02:54:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@950 -- # kill -0 78553 00:08:09.738 02:54:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@951 -- # uname 00:08:09.738 02:54:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:09.738 02:54:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78553 00:08:09.738 02:54:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:09.738 02:54:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:09.738 killing process with pid 78553 00:08:09.738 02:54:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78553' 00:08:09.738 02:54:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@965 -- # kill 78553 00:08:09.738 02:54:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@970 -- # wait 78553 00:08:09.997 02:54:55 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:08:09.997 00:08:09.997 real 0m11.190s 00:08:09.997 user 0m16.163s 00:08:09.997 sys 0m3.855s 00:08:09.997 02:54:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:09.997 ************************************ 00:08:09.997 END TEST bdev_nbd 00:08:09.997 ************************************ 00:08:09.997 02:54:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:09.997 02:54:55 blockdev_nvme -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:08:09.997 02:54:55 blockdev_nvme -- bdev/blockdev.sh@764 -- # '[' nvme = nvme ']' 00:08:09.997 skipping fio tests on NVMe due to multi-ns failures. 00:08:09.997 02:54:55 blockdev_nvme -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
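The bdev_nbd stage traced above drives SPDK's NBD export path entirely through rpc.py against /var/tmp/spdk-nbd.sock. A condensed, hand-runnable sketch of that flow follows, using only RPCs that appear in the trace; the bdev name, device node, and the 20-iteration wait mirror the trace, while the retry delay and an already-running spdk-nbd target are assumptions.

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk-nbd.sock

# export a bdev as a kernel NBD device (what nbd_start_disks does per bdev)
$RPC -s $SOCK nbd_start_disk Nvme0n1 /dev/nbd0

# wait until the kernel lists the device, mirroring waitfornbd (retry delay assumed)
for i in $(seq 1 20); do
  grep -q -w nbd0 /proc/partitions && break
  sleep 0.1
done

# list active exports, then detach the device again (what nbd_stop_disks does)
$RPC -s $SOCK nbd_get_disks
$RPC -s $SOCK nbd_stop_disk /dev/nbd0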
00:08:09.997 02:54:55 blockdev_nvme -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:09.998 02:54:55 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:09.998 02:54:55 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:08:09.998 02:54:55 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:09.998 02:54:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:09.998 ************************************ 00:08:09.998 START TEST bdev_verify 00:08:09.998 ************************************ 00:08:09.998 02:54:55 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:09.998 [2024-05-14 02:54:55.942716] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:08:09.998 [2024-05-14 02:54:55.942887] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78943 ] 00:08:10.257 [2024-05-14 02:54:56.079304] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:10.257 [2024-05-14 02:54:56.097428] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:10.257 [2024-05-14 02:54:56.132536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.257 [2024-05-14 02:54:56.132609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.515 Running I/O for 5 seconds... 
00:08:15.810 00:08:15.810 Latency(us) 00:08:15.810 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:15.810 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:15.810 Verification LBA range: start 0x0 length 0xbd0bd 00:08:15.810 Nvme0n1 : 5.07 1553.11 6.07 0.00 0.00 82090.29 10247.45 75306.82 00:08:15.810 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:15.810 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:08:15.810 Nvme0n1 : 5.07 1615.58 6.31 0.00 0.00 79059.44 17158.52 71493.82 00:08:15.810 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:15.810 Verification LBA range: start 0x0 length 0xa0000 00:08:15.811 Nvme1n1 : 5.07 1552.47 6.06 0.00 0.00 82003.80 10485.76 71493.82 00:08:15.811 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:15.811 Verification LBA range: start 0xa0000 length 0xa0000 00:08:15.811 Nvme1n1 : 5.07 1614.21 6.31 0.00 0.00 78936.20 18588.39 69110.69 00:08:15.811 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:15.811 Verification LBA range: start 0x0 length 0x80000 00:08:15.811 Nvme2n1 : 5.07 1551.15 6.06 0.00 0.00 81864.86 12332.68 69587.32 00:08:15.811 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:15.811 Verification LBA range: start 0x80000 length 0x80000 00:08:15.811 Nvme2n1 : 5.08 1612.84 6.30 0.00 0.00 78790.14 20018.27 67204.19 00:08:15.811 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:15.811 Verification LBA range: start 0x0 length 0x80000 00:08:15.811 Nvme2n2 : 5.08 1549.83 6.05 0.00 0.00 81770.82 14596.65 67680.81 00:08:15.811 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:15.811 Verification LBA range: start 0x80000 length 0x80000 00:08:15.811 Nvme2n2 : 5.08 1611.51 6.29 0.00 0.00 78683.17 20375.74 64344.44 00:08:15.811 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:15.811 Verification LBA range: start 0x0 length 0x80000 00:08:15.811 Nvme2n3 : 5.09 1558.73 6.09 0.00 0.00 81384.78 6464.23 71970.44 00:08:15.811 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:15.811 Verification LBA range: start 0x80000 length 0x80000 00:08:15.811 Nvme2n3 : 5.09 1610.92 6.29 0.00 0.00 78563.97 13166.78 68157.44 00:08:15.811 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:15.811 Verification LBA range: start 0x0 length 0x20000 00:08:15.811 Nvme3n1 : 5.09 1558.18 6.09 0.00 0.00 81269.34 6762.12 75306.82 00:08:15.811 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:15.811 Verification LBA range: start 0x20000 length 0x20000 00:08:15.811 Nvme3n1 : 5.09 1610.27 6.29 0.00 0.00 78463.93 9115.46 71493.82 00:08:15.811 =================================================================================================================== 00:08:15.811 Total : 18998.82 74.21 0.00 0.00 80211.85 6464.23 75306.82 00:08:16.379 00:08:16.379 real 0m6.246s 00:08:16.379 user 0m11.674s 00:08:16.379 sys 0m0.212s 00:08:16.379 02:55:02 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:16.379 02:55:02 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:08:16.379 ************************************ 00:08:16.379 END TEST bdev_verify 00:08:16.379 ************************************ 00:08:16.379 02:55:02 blockdev_nvme -- bdev/blockdev.sh@778 -- # 
run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:16.379 02:55:02 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:08:16.379 02:55:02 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:16.379 02:55:02 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:16.379 ************************************ 00:08:16.379 START TEST bdev_verify_big_io 00:08:16.379 ************************************ 00:08:16.379 02:55:02 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:16.379 [2024-05-14 02:55:02.260856] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:08:16.379 [2024-05-14 02:55:02.261065] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79030 ] 00:08:16.637 [2024-05-14 02:55:02.411919] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:16.637 [2024-05-14 02:55:02.435682] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:16.637 [2024-05-14 02:55:02.481938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.637 [2024-05-14 02:55:02.481985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.896 Running I/O for 5 seconds... 
00:08:23.460 00:08:23.460 Latency(us) 00:08:23.460 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:23.460 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:23.460 Verification LBA range: start 0x0 length 0xbd0b 00:08:23.460 Nvme0n1 : 5.68 135.20 8.45 0.00 0.00 922410.05 22997.18 876990.84 00:08:23.460 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:23.460 Verification LBA range: start 0xbd0b length 0xbd0b 00:08:23.460 Nvme0n1 : 5.61 133.22 8.33 0.00 0.00 925224.55 22043.93 957063.91 00:08:23.460 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:23.460 Verification LBA range: start 0x0 length 0xa000 00:08:23.460 Nvme1n1 : 5.74 134.67 8.42 0.00 0.00 896275.67 63391.19 991380.95 00:08:23.460 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:23.460 Verification LBA range: start 0xa000 length 0xa000 00:08:23.460 Nvme1n1 : 5.78 136.94 8.56 0.00 0.00 873911.90 57195.05 983754.94 00:08:23.460 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:23.460 Verification LBA range: start 0x0 length 0x8000 00:08:23.460 Nvme2n1 : 5.69 135.09 8.44 0.00 0.00 878149.82 110577.11 850299.81 00:08:23.460 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:23.460 Verification LBA range: start 0x8000 length 0x8000 00:08:23.460 Nvme2n1 : 5.69 131.03 8.19 0.00 0.00 892413.12 74353.57 1487071.42 00:08:23.460 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:23.460 Verification LBA range: start 0x0 length 0x8000 00:08:23.460 Nvme2n2 : 5.74 137.75 8.61 0.00 0.00 838855.53 54335.30 876990.84 00:08:23.460 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:23.460 Verification LBA range: start 0x8000 length 0x8000 00:08:23.460 Nvme2n2 : 5.80 135.73 8.48 0.00 0.00 838589.48 81979.58 1509949.44 00:08:23.460 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:23.460 Verification LBA range: start 0x0 length 0x8000 00:08:23.460 Nvme2n3 : 5.78 143.93 9.00 0.00 0.00 786630.64 34078.72 899868.86 00:08:23.460 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:23.460 Verification LBA range: start 0x8000 length 0x8000 00:08:23.461 Nvme2n3 : 5.82 150.93 9.43 0.00 0.00 737858.29 21924.77 1121023.07 00:08:23.461 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:23.461 Verification LBA range: start 0x0 length 0x2000 00:08:23.461 Nvme3n1 : 5.79 154.73 9.67 0.00 0.00 713893.90 2740.60 903681.86 00:08:23.461 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:23.461 Verification LBA range: start 0x2000 length 0x2000 00:08:23.461 Nvme3n1 : 5.86 162.22 10.14 0.00 0.00 670096.63 2472.49 1593835.52 00:08:23.461 =================================================================================================================== 00:08:23.461 Total : 1691.43 105.71 0.00 0.00 825112.86 2472.49 1593835.52 00:08:23.461 00:08:23.461 real 0m7.227s 00:08:23.461 user 0m13.544s 00:08:23.461 sys 0m0.277s 00:08:23.461 02:55:09 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:23.461 02:55:09 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:08:23.461 ************************************ 00:08:23.461 END TEST bdev_verify_big_io 00:08:23.461 ************************************ 00:08:23.461 02:55:09 
blockdev_nvme -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:23.461 02:55:09 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:08:23.461 02:55:09 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:23.461 02:55:09 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:23.461 ************************************ 00:08:23.461 START TEST bdev_write_zeroes 00:08:23.461 ************************************ 00:08:23.461 02:55:09 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:23.718 [2024-05-14 02:55:09.537236] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:08:23.718 [2024-05-14 02:55:09.537445] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79134 ] 00:08:23.718 [2024-05-14 02:55:09.687431] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:23.718 [2024-05-14 02:55:09.708277] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.718 [2024-05-14 02:55:09.744150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.284 Running I/O for 1 seconds... 00:08:25.218 00:08:25.218 Latency(us) 00:08:25.218 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:25.218 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:25.218 Nvme0n1 : 1.01 9043.42 35.33 0.00 0.00 14105.25 7387.69 26333.56 00:08:25.218 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:25.218 Nvme1n1 : 1.01 9031.38 35.28 0.00 0.00 14103.33 10724.07 20137.43 00:08:25.218 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:25.218 Nvme2n1 : 1.02 9072.15 35.44 0.00 0.00 14035.36 7268.54 17754.30 00:08:25.218 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:25.218 Nvme2n2 : 1.02 9061.55 35.40 0.00 0.00 13990.70 7477.06 18230.92 00:08:25.218 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:25.218 Nvme2n3 : 1.02 9051.62 35.36 0.00 0.00 13982.81 7626.01 17992.61 00:08:25.218 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:25.218 Nvme3n1 : 1.02 9041.85 35.32 0.00 0.00 13974.83 7626.01 17754.30 00:08:25.218 =================================================================================================================== 00:08:25.218 Total : 54301.98 212.12 0.00 0.00 14031.88 7268.54 26333.56 00:08:25.477 00:08:25.477 real 0m1.921s 00:08:25.477 user 0m1.593s 00:08:25.477 sys 0m0.213s 00:08:25.477 02:55:11 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:25.477 02:55:11 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:08:25.477 ************************************ 00:08:25.477 END TEST bdev_write_zeroes 00:08:25.477 ************************************ 00:08:25.477 02:55:11 blockdev_nvme -- 
bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:25.477 02:55:11 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:08:25.477 02:55:11 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:25.477 02:55:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:25.477 ************************************ 00:08:25.477 START TEST bdev_json_nonenclosed 00:08:25.477 ************************************ 00:08:25.477 02:55:11 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:25.736 [2024-05-14 02:55:11.504677] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:08:25.736 [2024-05-14 02:55:11.504877] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79171 ] 00:08:25.736 [2024-05-14 02:55:11.654546] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:25.736 [2024-05-14 02:55:11.678607] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.736 [2024-05-14 02:55:11.722724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.736 [2024-05-14 02:55:11.722862] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:25.736 [2024-05-14 02:55:11.722903] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:25.736 [2024-05-14 02:55:11.722926] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:25.995 00:08:25.995 real 0m0.447s 00:08:25.995 user 0m0.219s 00:08:25.995 sys 0m0.123s 00:08:25.995 ************************************ 00:08:25.995 END TEST bdev_json_nonenclosed 00:08:25.995 ************************************ 00:08:25.995 02:55:11 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:25.995 02:55:11 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:25.995 02:55:11 blockdev_nvme -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:25.995 02:55:11 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:08:25.995 02:55:11 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:25.995 02:55:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:25.995 ************************************ 00:08:25.995 START TEST bdev_json_nonarray 00:08:25.995 ************************************ 00:08:25.995 02:55:11 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:25.995 [2024-05-14 02:55:12.007359] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 
00:08:25.995 [2024-05-14 02:55:12.007565] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79196 ] 00:08:26.253 [2024-05-14 02:55:12.156657] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:26.253 [2024-05-14 02:55:12.179754] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.253 [2024-05-14 02:55:12.224716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.253 [2024-05-14 02:55:12.224849] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:08:26.253 [2024-05-14 02:55:12.224894] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:26.253 [2024-05-14 02:55:12.224917] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:26.511 00:08:26.511 real 0m0.450s 00:08:26.511 user 0m0.220s 00:08:26.511 sys 0m0.125s 00:08:26.511 02:55:12 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:26.511 02:55:12 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:26.511 ************************************ 00:08:26.511 END TEST bdev_json_nonarray 00:08:26.511 ************************************ 00:08:26.511 02:55:12 blockdev_nvme -- bdev/blockdev.sh@787 -- # [[ nvme == bdev ]] 00:08:26.511 02:55:12 blockdev_nvme -- bdev/blockdev.sh@794 -- # [[ nvme == gpt ]] 00:08:26.511 02:55:12 blockdev_nvme -- bdev/blockdev.sh@798 -- # [[ nvme == crypto_sw ]] 00:08:26.511 02:55:12 blockdev_nvme -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:08:26.511 02:55:12 blockdev_nvme -- bdev/blockdev.sh@811 -- # cleanup 00:08:26.511 02:55:12 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:26.511 02:55:12 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:26.511 02:55:12 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:08:26.511 02:55:12 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:08:26.511 02:55:12 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:08:26.511 02:55:12 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:08:26.511 00:08:26.511 real 0m32.152s 00:08:26.511 user 0m49.867s 00:08:26.511 sys 0m6.093s 00:08:26.511 02:55:12 blockdev_nvme -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:26.511 02:55:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:26.511 ************************************ 00:08:26.511 END TEST blockdev_nvme 00:08:26.511 ************************************ 00:08:26.511 02:55:12 -- spdk/autotest.sh@209 -- # uname -s 00:08:26.511 02:55:12 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:08:26.511 02:55:12 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:26.511 02:55:12 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:26.511 02:55:12 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:26.511 02:55:12 -- common/autotest_common.sh@10 -- # set +x 00:08:26.511 ************************************ 00:08:26.511 START TEST blockdev_nvme_gpt 00:08:26.511 ************************************ 00:08:26.511 
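The blockdev_nvme stages that just finished (bdev_verify, bdev_verify_big_io, bdev_write_zeroes) all invoke the same bdevperf example binary; only the IO size, workload, run time, and core mask change. A sketch of those invocations with the flags copied from the trace, assuming bdev.json already describes the NVMe bdevs as the earlier setup produced it:

BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
CONF=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json

# verify: 4 KiB write-then-read-back-and-compare, queue depth 128, 5 s, two cores (-m 0x3)
$BDEVPERF --json $CONF -q 128 -o 4096 -w verify -t 5 -C -m 0x3

# big-IO verify: identical apart from the 64 KiB IO size
$BDEVPERF --json $CONF -q 128 -o 65536 -w verify -t 5 -C -m 0x3

# write_zeroes: 4 KiB, 1 s, default single-core mask
$BDEVPERF --json $CONF -q 128 -o 4096 -w write_zeroes -t 1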
02:55:12 blockdev_nvme_gpt -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:26.768 * Looking for test storage... 00:08:26.768 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:26.768 02:55:12 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:26.768 02:55:12 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:08:26.768 02:55:12 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:26.768 02:55:12 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:26.768 02:55:12 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:26.768 02:55:12 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:26.768 02:55:12 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:08:26.768 02:55:12 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:08:26.768 02:55:12 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:08:26.768 02:55:12 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:08:26.768 02:55:12 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:08:26.768 02:55:12 blockdev_nvme_gpt -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:08:26.768 02:55:12 blockdev_nvme_gpt -- bdev/blockdev.sh@674 -- # uname -s 00:08:26.768 02:55:12 blockdev_nvme_gpt -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:08:26.768 02:55:12 blockdev_nvme_gpt -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:08:26.768 02:55:12 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # test_type=gpt 00:08:26.768 02:55:12 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # crypto_device= 00:08:26.768 02:55:12 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # dek= 00:08:26.768 02:55:12 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # env_ctx= 00:08:26.768 02:55:12 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:08:26.768 02:55:12 blockdev_nvme_gpt -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:08:26.768 02:55:12 blockdev_nvme_gpt -- bdev/blockdev.sh@690 -- # [[ gpt == bdev ]] 00:08:26.768 02:55:12 blockdev_nvme_gpt -- bdev/blockdev.sh@690 -- # [[ gpt == crypto_* ]] 00:08:26.768 02:55:12 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:08:26.768 02:55:12 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=79272 00:08:26.768 02:55:12 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:26.768 02:55:12 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 79272 00:08:26.768 02:55:12 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:26.768 02:55:12 blockdev_nvme_gpt -- common/autotest_common.sh@827 -- # '[' -z 79272 ']' 00:08:26.768 02:55:12 blockdev_nvme_gpt -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:26.768 02:55:12 blockdev_nvme_gpt -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:26.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:26.768 02:55:12 blockdev_nvme_gpt -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:26.768 02:55:12 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:26.768 02:55:12 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:26.768 [2024-05-14 02:55:12.684056] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:08:26.768 [2024-05-14 02:55:12.684293] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79272 ] 00:08:27.026 [2024-05-14 02:55:12.838264] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:27.026 [2024-05-14 02:55:12.863353] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.026 [2024-05-14 02:55:12.908288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.980 02:55:13 blockdev_nvme_gpt -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:27.980 02:55:13 blockdev_nvme_gpt -- common/autotest_common.sh@860 -- # return 0 00:08:27.980 02:55:13 blockdev_nvme_gpt -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:08:27.980 02:55:13 blockdev_nvme_gpt -- bdev/blockdev.sh@702 -- # setup_gpt_conf 00:08:27.980 02:55:13 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:28.238 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:28.238 Waiting for block devices as requested 00:08:28.238 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:28.495 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:28.495 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:28.495 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:33.761 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:33.762 02:55:19 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:08:33.762 02:55:19 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:08:33.762 02:55:19 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:08:33.762 02:55:19 blockdev_nvme_gpt -- common/autotest_common.sh@1666 -- # local nvme bdf 00:08:33.762 02:55:19 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:08:33.762 02:55:19 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:08:33.762 02:55:19 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:08:33.762 02:55:19 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:33.762 02:55:19 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:08:33.762 02:55:19 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:08:33.762 02:55:19 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:08:33.762 02:55:19 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:08:33.762 02:55:19 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:33.762 02:55:19 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:08:33.762 02:55:19 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:08:33.762 02:55:19 
blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n1 00:08:33.762 02:55:19 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local device=nvme2n1 00:08:33.762 02:55:19 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:08:33.762 02:55:19 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:08:33.762 02:55:19 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:08:33.762 02:55:19 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n2 00:08:33.762 02:55:19 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local device=nvme2n2 00:08:33.762 02:55:19 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:08:33.762 02:55:19 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:08:33.762 02:55:19 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:08:33.762 02:55:19 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n3 00:08:33.762 02:55:19 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local device=nvme2n3 00:08:33.762 02:55:19 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:08:33.762 02:55:19 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:08:33.762 02:55:19 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:08:33.762 02:55:19 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3c3n1 00:08:33.762 02:55:19 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local device=nvme3c3n1 00:08:33.762 02:55:19 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:08:33.762 02:55:19 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:08:33.762 02:55:19 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:08:33.762 02:55:19 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3n1 00:08:33.762 02:55:19 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local device=nvme3n1 00:08:33.762 02:55:19 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:08:33.762 02:55:19 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:08:33.762 02:55:19 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:10.0/nvme/nvme1/nvme1n1' '/sys/bus/pci/drivers/nvme/0000:00:11.0/nvme/nvme0/nvme0n1' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n1' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n2' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n3' '/sys/bus/pci/drivers/nvme/0000:00:13.0/nvme/nvme3/nvme3c3n1') 00:08:33.762 02:55:19 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # local nvme_devs nvme_dev 00:08:33.762 02:55:19 blockdev_nvme_gpt -- bdev/blockdev.sh@108 -- # gpt_nvme= 00:08:33.762 02:55:19 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # for nvme_dev in "${nvme_devs[@]}" 00:08:33.762 02:55:19 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # [[ -z '' ]] 00:08:33.762 02:55:19 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # dev=/dev/nvme1n1 00:08:33.762 02:55:19 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # parted /dev/nvme1n1 -ms print 00:08:33.762 02:55:19 blockdev_nvme_gpt -- 
bdev/blockdev.sh@113 -- # pt='Error: /dev/nvme1n1: unrecognised disk label 00:08:33.762 BYT; 00:08:33.762 /dev/nvme1n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:08:33.762 02:55:19 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # [[ Error: /dev/nvme1n1: unrecognised disk label 00:08:33.762 BYT; 00:08:33.762 /dev/nvme1n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\1\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:08:33.762 02:55:19 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # gpt_nvme=/dev/nvme1n1 00:08:33.762 02:55:19 blockdev_nvme_gpt -- bdev/blockdev.sh@116 -- # break 00:08:33.762 02:55:19 blockdev_nvme_gpt -- bdev/blockdev.sh@119 -- # [[ -n /dev/nvme1n1 ]] 00:08:33.762 02:55:19 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:08:33.762 02:55:19 blockdev_nvme_gpt -- bdev/blockdev.sh@125 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:33.762 02:55:19 blockdev_nvme_gpt -- bdev/blockdev.sh@128 -- # parted -s /dev/nvme1n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:08:33.762 02:55:19 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt_old 00:08:33.762 02:55:19 blockdev_nvme_gpt -- scripts/common.sh@408 -- # local spdk_guid 00:08:33.762 02:55:19 blockdev_nvme_gpt -- scripts/common.sh@410 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:33.762 02:55:19 blockdev_nvme_gpt -- scripts/common.sh@412 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:33.762 02:55:19 blockdev_nvme_gpt -- scripts/common.sh@413 -- # IFS='()' 00:08:33.762 02:55:19 blockdev_nvme_gpt -- scripts/common.sh@413 -- # read -r _ spdk_guid _ 00:08:33.762 02:55:19 blockdev_nvme_gpt -- scripts/common.sh@413 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:33.762 02:55:19 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:08:33.762 02:55:19 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:33.762 02:55:19 blockdev_nvme_gpt -- scripts/common.sh@416 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:33.762 02:55:19 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:33.762 02:55:19 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # get_spdk_gpt 00:08:33.762 02:55:19 blockdev_nvme_gpt -- scripts/common.sh@420 -- # local spdk_guid 00:08:33.762 02:55:19 blockdev_nvme_gpt -- scripts/common.sh@422 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:33.762 02:55:19 blockdev_nvme_gpt -- scripts/common.sh@424 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:33.762 02:55:19 blockdev_nvme_gpt -- scripts/common.sh@425 -- # IFS='()' 00:08:33.762 02:55:19 blockdev_nvme_gpt -- scripts/common.sh@425 -- # read -r _ spdk_guid _ 00:08:33.762 02:55:19 blockdev_nvme_gpt -- scripts/common.sh@425 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:33.762 02:55:19 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:08:33.762 02:55:19 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:33.762 02:55:19 blockdev_nvme_gpt -- scripts/common.sh@428 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:33.762 02:55:19 blockdev_nvme_gpt -- 
bdev/blockdev.sh@131 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:33.762 02:55:19 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme1n1 00:08:34.698 The operation has completed successfully. 00:08:34.698 02:55:20 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme1n1 00:08:36.073 The operation has completed successfully. 00:08:36.073 02:55:21 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:36.332 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:36.899 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:36.899 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:36.899 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:36.899 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:36.899 02:55:22 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # rpc_cmd bdev_get_bdevs 00:08:36.899 02:55:22 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.899 02:55:22 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:37.158 [] 00:08:37.158 02:55:22 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.158 02:55:22 blockdev_nvme_gpt -- bdev/blockdev.sh@136 -- # setup_nvme_conf 00:08:37.158 02:55:22 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:08:37.158 02:55:22 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:08:37.158 02:55:22 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:37.158 02:55:23 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:08:37.158 02:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.158 02:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:37.419 02:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.419 02:55:23 blockdev_nvme_gpt -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:08:37.419 02:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.419 02:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:37.419 02:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.419 02:55:23 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # cat 00:08:37.419 02:55:23 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:08:37.419 02:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.419 02:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:37.419 02:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.419 02:55:23 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd 
save_subsystem_config -n bdev 00:08:37.419 02:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.419 02:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:37.419 02:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.419 02:55:23 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:08:37.419 02:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.419 02:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:37.419 02:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.419 02:55:23 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:08:37.419 02:55:23 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:08:37.419 02:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.419 02:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:37.419 02:55:23 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:08:37.419 02:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.419 02:55:23 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:08:37.419 02:55:23 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774144,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774143,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 774400,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "70be2e9c-bf5a-401e-b97e-c28cb3d1dbb2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' 
"num_blocks": 1310720,' ' "uuid": "70be2e9c-bf5a-401e-b97e-c28cb3d1dbb2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "2f469ecd-92ed-40ef-870a-4571c5629456"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2f469ecd-92ed-40 02:55:23 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # jq -r .name 00:08:37.420 ef-870a-4571c5629456",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "e835692d-95f7-400f-97f3-b00b4f64798b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e835692d-95f7-400f-97f3-b00b4f64798b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' 
"security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "e1aa2d8c-b4b8-4477-9455-d4859090b30a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e1aa2d8c-b4b8-4477-9455-d4859090b30a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "42b250c4-7a44-4a6d-898d-91d9ea23af5f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "42b250c4-7a44-4a6d-898d-91d9ea23af5f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:08:37.683 02:55:23 blockdev_nvme_gpt -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:08:37.683 02:55:23 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1p1 00:08:37.683 02:55:23 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:08:37.683 02:55:23 blockdev_nvme_gpt -- bdev/blockdev.sh@754 -- # killprocess 79272 00:08:37.683 02:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@946 -- # '[' -z 79272 ']' 00:08:37.683 02:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@950 -- # kill -0 79272 00:08:37.683 02:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@951 -- # uname 00:08:37.683 
02:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:37.683 02:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 79272 00:08:37.683 02:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:37.683 02:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:37.683 killing process with pid 79272 00:08:37.683 02:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # echo 'killing process with pid 79272' 00:08:37.683 02:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@965 -- # kill 79272 00:08:37.683 02:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@970 -- # wait 79272 00:08:37.942 02:55:23 blockdev_nvme_gpt -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:37.942 02:55:23 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:08:37.942 02:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:08:37.942 02:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:37.942 02:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:37.942 ************************************ 00:08:37.942 START TEST bdev_hello_world 00:08:37.942 ************************************ 00:08:37.942 02:55:23 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:08:37.942 [2024-05-14 02:55:23.935348] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:08:37.942 [2024-05-14 02:55:23.935548] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79878 ] 00:08:38.200 [2024-05-14 02:55:24.084514] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:38.200 [2024-05-14 02:55:24.103771] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.200 [2024-05-14 02:55:24.140013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.767 [2024-05-14 02:55:24.501197] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:08:38.767 [2024-05-14 02:55:24.501264] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:08:38.768 [2024-05-14 02:55:24.501291] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:08:38.768 [2024-05-14 02:55:24.503726] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:08:38.768 [2024-05-14 02:55:24.504239] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:08:38.768 [2024-05-14 02:55:24.504283] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:08:38.768 [2024-05-14 02:55:24.504495] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
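For reference, the hello_world run recorded above reduces to the single invocation below (paths and bdev name exactly as traced by run_test in this log); the example opens Nvme0n1p1, writes a buffer through an I/O channel, reads it back, and prints the string shown in the NOTICE lines:

  # Same command the test wrapper executed; the trailing empty argument passed
  # by run_test is omitted here since it is not needed for a manual run.
  /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -b Nvme0n1p1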
00:08:38.768 00:08:38.768 [2024-05-14 02:55:24.504543] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:38.768 00:08:38.768 real 0m0.854s 00:08:38.768 user 0m0.558s 00:08:38.768 sys 0m0.191s 00:08:38.768 02:55:24 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:38.768 02:55:24 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:38.768 ************************************ 00:08:38.768 END TEST bdev_hello_world 00:08:38.768 ************************************ 00:08:38.768 02:55:24 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:08:38.768 02:55:24 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:38.768 02:55:24 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:38.768 02:55:24 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:38.768 ************************************ 00:08:38.768 START TEST bdev_bounds 00:08:38.768 ************************************ 00:08:38.768 02:55:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1121 -- # bdev_bounds '' 00:08:38.768 02:55:24 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=79909 00:08:38.768 02:55:24 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:38.768 Process bdevio pid: 79909 00:08:38.768 02:55:24 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:38.768 02:55:24 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 79909' 00:08:38.768 02:55:24 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 79909 00:08:38.768 02:55:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@827 -- # '[' -z 79909 ']' 00:08:38.768 02:55:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.768 02:55:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:38.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.768 02:55:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.768 02:55:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:38.768 02:55:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:39.026 [2024-05-14 02:55:24.844913] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:08:39.026 [2024-05-14 02:55:24.845154] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79909 ] 00:08:39.026 [2024-05-14 02:55:24.994726] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:39.026 [2024-05-14 02:55:25.016093] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:39.285 [2024-05-14 02:55:25.056071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:39.285 [2024-05-14 02:55:25.056256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:39.285 [2024-05-14 02:55:25.056197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.850 02:55:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:39.850 02:55:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@860 -- # return 0 00:08:39.850 02:55:25 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:40.110 I/O targets: 00:08:40.110 Nvme0n1p1: 774144 blocks of 4096 bytes (3024 MiB) 00:08:40.110 Nvme0n1p2: 774143 blocks of 4096 bytes (3024 MiB) 00:08:40.110 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:08:40.110 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:40.110 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:40.110 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:40.110 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:08:40.110 00:08:40.110 00:08:40.110 CUnit - A unit testing framework for C - Version 2.1-3 00:08:40.110 http://cunit.sourceforge.net/ 00:08:40.110 00:08:40.110 00:08:40.110 Suite: bdevio tests on: Nvme3n1 00:08:40.110 Test: blockdev write read block ...passed 00:08:40.110 Test: blockdev write zeroes read block ...passed 00:08:40.110 Test: blockdev write zeroes read no split ...passed 00:08:40.110 Test: blockdev write zeroes read split ...passed 00:08:40.110 Test: blockdev write zeroes read split partial ...passed 00:08:40.110 Test: blockdev reset ...[2024-05-14 02:55:25.915616] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:08:40.110 [2024-05-14 02:55:25.917718] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:40.110 passed 00:08:40.110 Test: blockdev write read 8 blocks ...passed 00:08:40.110 Test: blockdev write read size > 128k ...passed 00:08:40.110 Test: blockdev write read invalid size ...passed 00:08:40.110 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:40.110 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:40.110 Test: blockdev write read max offset ...passed 00:08:40.110 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:40.110 Test: blockdev writev readv 8 blocks ...passed 00:08:40.110 Test: blockdev writev readv 30 x 1block ...passed 00:08:40.110 Test: blockdev writev readv block ...passed 00:08:40.110 Test: blockdev writev readv size > 128k ...passed 00:08:40.110 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:40.110 Test: blockdev comparev and writev ...[2024-05-14 02:55:25.923841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2dfc04000 len:0x1000 00:08:40.110 [2024-05-14 02:55:25.923902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:40.110 passed 00:08:40.110 Test: blockdev nvme passthru rw ...passed 00:08:40.110 Test: blockdev nvme passthru vendor specific ...[2024-05-14 02:55:25.924741] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:40.110 [2024-05-14 02:55:25.924785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:40.110 passed 00:08:40.110 Test: blockdev nvme admin passthru ...passed 00:08:40.110 Test: blockdev copy ...passed 00:08:40.110 Suite: bdevio tests on: Nvme2n3 00:08:40.110 Test: blockdev write read block ...passed 00:08:40.110 Test: blockdev write zeroes read block ...passed 00:08:40.110 Test: blockdev write zeroes read no split ...passed 00:08:40.110 Test: blockdev write zeroes read split ...passed 00:08:40.110 Test: blockdev write zeroes read split partial ...passed 00:08:40.110 Test: blockdev reset ...[2024-05-14 02:55:25.939014] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:40.110 [2024-05-14 02:55:25.941359] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:40.110 passed 00:08:40.110 Test: blockdev write read 8 blocks ...passed 00:08:40.110 Test: blockdev write read size > 128k ...passed 00:08:40.110 Test: blockdev write read invalid size ...passed 00:08:40.110 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:40.110 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:40.110 Test: blockdev write read max offset ...passed 00:08:40.110 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:40.110 Test: blockdev writev readv 8 blocks ...passed 00:08:40.110 Test: blockdev writev readv 30 x 1block ...passed 00:08:40.110 Test: blockdev writev readv block ...passed 00:08:40.110 Test: blockdev writev readv size > 128k ...passed 00:08:40.110 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:40.110 Test: blockdev comparev and writev ...[2024-05-14 02:55:25.947333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2dfc04000 len:0x1000 00:08:40.110 [2024-05-14 02:55:25.947390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:40.111 passed 00:08:40.111 Test: blockdev nvme passthru rw ...passed 00:08:40.111 Test: blockdev nvme passthru vendor specific ...[2024-05-14 02:55:25.948240] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:40.111 [2024-05-14 02:55:25.948286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:40.111 passed 00:08:40.111 Test: blockdev nvme admin passthru ...passed 00:08:40.111 Test: blockdev copy ...passed 00:08:40.111 Suite: bdevio tests on: Nvme2n2 00:08:40.111 Test: blockdev write read block ...passed 00:08:40.111 Test: blockdev write zeroes read block ...passed 00:08:40.111 Test: blockdev write zeroes read no split ...passed 00:08:40.111 Test: blockdev write zeroes read split ...passed 00:08:40.111 Test: blockdev write zeroes read split partial ...passed 00:08:40.111 Test: blockdev reset ...[2024-05-14 02:55:25.962340] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:40.111 [2024-05-14 02:55:25.964611] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:40.111 passed 00:08:40.111 Test: blockdev write read 8 blocks ...passed 00:08:40.111 Test: blockdev write read size > 128k ...passed 00:08:40.111 Test: blockdev write read invalid size ...passed 00:08:40.111 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:40.111 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:40.111 Test: blockdev write read max offset ...passed 00:08:40.111 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:40.111 Test: blockdev writev readv 8 blocks ...passed 00:08:40.111 Test: blockdev writev readv 30 x 1block ...passed 00:08:40.111 Test: blockdev writev readv block ...passed 00:08:40.111 Test: blockdev writev readv size > 128k ...passed 00:08:40.111 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:40.111 Test: blockdev comparev and writev ...[2024-05-14 02:55:25.970108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e1422000 len:0x1000 00:08:40.111 [2024-05-14 02:55:25.970174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:40.111 passed 00:08:40.111 Test: blockdev nvme passthru rw ...passed 00:08:40.111 Test: blockdev nvme passthru vendor specific ...[2024-05-14 02:55:25.970902] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:40.111 [2024-05-14 02:55:25.970942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:40.111 passed 00:08:40.111 Test: blockdev nvme admin passthru ...passed 00:08:40.111 Test: blockdev copy ...passed 00:08:40.111 Suite: bdevio tests on: Nvme2n1 00:08:40.111 Test: blockdev write read block ...passed 00:08:40.111 Test: blockdev write zeroes read block ...passed 00:08:40.111 Test: blockdev write zeroes read no split ...passed 00:08:40.111 Test: blockdev write zeroes read split ...passed 00:08:40.111 Test: blockdev write zeroes read split partial ...passed 00:08:40.111 Test: blockdev reset ...[2024-05-14 02:55:25.984823] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:40.111 [2024-05-14 02:55:25.987228] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:40.111 passed 00:08:40.111 Test: blockdev write read 8 blocks ...passed 00:08:40.111 Test: blockdev write read size > 128k ...passed 00:08:40.111 Test: blockdev write read invalid size ...passed 00:08:40.111 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:40.111 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:40.111 Test: blockdev write read max offset ...passed 00:08:40.111 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:40.111 Test: blockdev writev readv 8 blocks ...passed 00:08:40.111 Test: blockdev writev readv 30 x 1block ...passed 00:08:40.111 Test: blockdev writev readv block ...passed 00:08:40.111 Test: blockdev writev readv size > 128k ...passed 00:08:40.111 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:40.111 Test: blockdev comparev and writev ...[2024-05-14 02:55:25.992630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2dfc0d000 len:0x1000 00:08:40.111 [2024-05-14 02:55:25.992686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:40.111 passed 00:08:40.111 Test: blockdev nvme passthru rw ...passed 00:08:40.111 Test: blockdev nvme passthru vendor specific ...[2024-05-14 02:55:25.993398] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:40.111 passed 00:08:40.111 Test: blockdev nvme admin passthru ...[2024-05-14 02:55:25.993440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:40.111 passed 00:08:40.111 Test: blockdev copy ...passed 00:08:40.111 Suite: bdevio tests on: Nvme1n1 00:08:40.111 Test: blockdev write read block ...passed 00:08:40.111 Test: blockdev write zeroes read block ...passed 00:08:40.111 Test: blockdev write zeroes read no split ...passed 00:08:40.111 Test: blockdev write zeroes read split ...passed 00:08:40.111 Test: blockdev write zeroes read split partial ...passed 00:08:40.111 Test: blockdev reset ...[2024-05-14 02:55:26.007930] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:08:40.111 [2024-05-14 02:55:26.010110] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:40.111 passed 00:08:40.111 Test: blockdev write read 8 blocks ...passed 00:08:40.111 Test: blockdev write read size > 128k ...passed 00:08:40.111 Test: blockdev write read invalid size ...passed 00:08:40.111 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:40.111 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:40.111 Test: blockdev write read max offset ...passed 00:08:40.111 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:40.111 Test: blockdev writev readv 8 blocks ...passed 00:08:40.111 Test: blockdev writev readv 30 x 1block ...passed 00:08:40.111 Test: blockdev writev readv block ...passed 00:08:40.111 Test: blockdev writev readv size > 128k ...passed 00:08:40.111 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:40.111 Test: blockdev comparev and writev ...[2024-05-14 02:55:26.016145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e0032000 len:0x1000 00:08:40.111 [2024-05-14 02:55:26.016197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:40.111 passed 00:08:40.111 Test: blockdev nvme passthru rw ...passed 00:08:40.111 Test: blockdev nvme passthru vendor specific ...[2024-05-14 02:55:26.016959] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:40.111 [2024-05-14 02:55:26.016999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:40.111 passed 00:08:40.111 Test: blockdev nvme admin passthru ...passed 00:08:40.111 Test: blockdev copy ...passed 00:08:40.111 Suite: bdevio tests on: Nvme0n1p2 00:08:40.111 Test: blockdev write read block ...passed 00:08:40.111 Test: blockdev write zeroes read block ...passed 00:08:40.111 Test: blockdev write zeroes read no split ...passed 00:08:40.111 Test: blockdev write zeroes read split ...passed 00:08:40.111 Test: blockdev write zeroes read split partial ...passed 00:08:40.111 Test: blockdev reset ...[2024-05-14 02:55:26.031941] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:08:40.111 [2024-05-14 02:55:26.033884] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:40.111 passed 00:08:40.111 Test: blockdev write read 8 blocks ...passed 00:08:40.111 Test: blockdev write read size > 128k ...passed 00:08:40.111 Test: blockdev write read invalid size ...passed 00:08:40.111 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:40.111 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:40.111 Test: blockdev write read max offset ...passed 00:08:40.111 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:40.111 Test: blockdev writev readv 8 blocks ...passed 00:08:40.111 Test: blockdev writev readv 30 x 1block ...passed 00:08:40.111 Test: blockdev writev readv block ...passed 00:08:40.111 Test: blockdev writev readv size > 128k ...passed 00:08:40.111 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:40.111 Test: blockdev comparev and writev ...passed 00:08:40.111 Test: blockdev nvme passthru rw ...passed 00:08:40.111 Test: blockdev nvme passthru vendor specific ...[2024-05-14 02:55:26.038823] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p2 since it has 00:08:40.111 separate metadata which is not supported yet. 00:08:40.111 passed 00:08:40.111 Test: blockdev nvme admin passthru ...passed 00:08:40.111 Test: blockdev copy ...passed 00:08:40.111 Suite: bdevio tests on: Nvme0n1p1 00:08:40.111 Test: blockdev write read block ...passed 00:08:40.111 Test: blockdev write zeroes read block ...passed 00:08:40.111 Test: blockdev write zeroes read no split ...passed 00:08:40.111 Test: blockdev write zeroes read split ...passed 00:08:40.111 Test: blockdev write zeroes read split partial ...passed 00:08:40.111 Test: blockdev reset ...[2024-05-14 02:55:26.052609] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:08:40.111 [2024-05-14 02:55:26.054483] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:08:40.111 passed 00:08:40.111 Test: blockdev write read 8 blocks ...passed 00:08:40.111 Test: blockdev write read size > 128k ...passed 00:08:40.111 Test: blockdev write read invalid size ...passed 00:08:40.111 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:40.111 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:40.111 Test: blockdev write read max offset ...passed 00:08:40.111 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:40.111 Test: blockdev writev readv 8 blocks ...passed 00:08:40.111 Test: blockdev writev readv 30 x 1block ...passed 00:08:40.111 Test: blockdev writev readv block ...passed 00:08:40.111 Test: blockdev writev readv size > 128k ...passed 00:08:40.111 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:40.111 Test: blockdev comparev and writev ...[2024-05-14 02:55:26.059554] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p1 since it has 00:08:40.111 separate metadata which is not supported yet. 
00:08:40.111 passed 00:08:40.111 Test: blockdev nvme passthru rw ...passed 00:08:40.111 Test: blockdev nvme passthru vendor specific ...passed 00:08:40.111 Test: blockdev nvme admin passthru ...passed 00:08:40.111 Test: blockdev copy ...passed 00:08:40.111 00:08:40.111 Run Summary: Type Total Ran Passed Failed Inactive 00:08:40.111 suites 7 7 n/a 0 0 00:08:40.111 tests 161 161 161 0 0 00:08:40.111 asserts 1006 1006 1006 0 n/a 00:08:40.111 00:08:40.112 Elapsed time = 0.367 seconds 00:08:40.112 0 00:08:40.112 02:55:26 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 79909 00:08:40.112 02:55:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@946 -- # '[' -z 79909 ']' 00:08:40.112 02:55:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@950 -- # kill -0 79909 00:08:40.112 02:55:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@951 -- # uname 00:08:40.112 02:55:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:40.112 02:55:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 79909 00:08:40.112 killing process with pid 79909 00:08:40.112 02:55:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:40.112 02:55:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:40.112 02:55:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # echo 'killing process with pid 79909' 00:08:40.112 02:55:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@965 -- # kill 79909 00:08:40.112 02:55:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@970 -- # wait 79909 00:08:40.371 02:55:26 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:08:40.371 00:08:40.371 real 0m1.539s 00:08:40.371 user 0m3.942s 00:08:40.371 sys 0m0.315s 00:08:40.371 02:55:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:40.371 ************************************ 00:08:40.371 END TEST bdev_bounds 00:08:40.371 ************************************ 00:08:40.371 02:55:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:40.371 02:55:26 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:40.371 02:55:26 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:08:40.371 02:55:26 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:40.371 02:55:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:40.371 ************************************ 00:08:40.371 START TEST bdev_nbd 00:08:40.371 ************************************ 00:08:40.371 02:55:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1121 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:40.371 02:55:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:08:40.371 02:55:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:08:40.371 02:55:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:40.371 02:55:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local 
conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:40.371 02:55:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:40.371 02:55:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:08:40.371 02:55:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=7 00:08:40.371 02:55:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:08:40.371 02:55:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:40.371 02:55:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:08:40.371 02:55:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=7 00:08:40.371 02:55:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:40.371 02:55:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:08:40.371 02:55:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:40.371 02:55:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:08:40.371 02:55:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=79957 00:08:40.371 02:55:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:40.371 02:55:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 79957 /var/tmp/spdk-nbd.sock 00:08:40.371 02:55:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@827 -- # '[' -z 79957 ']' 00:08:40.371 02:55:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:40.371 02:55:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:40.371 02:55:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:40.371 02:55:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:40.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:40.371 02:55:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:40.371 02:55:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:40.631 [2024-05-14 02:55:26.442080] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:08:40.631 [2024-05-14 02:55:26.442297] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:40.631 [2024-05-14 02:55:26.591537] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:40.631 [2024-05-14 02:55:26.608442] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.631 [2024-05-14 02:55:26.644787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.565 02:55:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:41.565 02:55:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@860 -- # return 0 00:08:41.565 02:55:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:41.565 02:55:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:41.565 02:55:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:41.565 02:55:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:41.565 02:55:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:41.565 02:55:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:41.565 02:55:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:41.565 02:55:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:41.565 02:55:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:08:41.565 02:55:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:41.565 02:55:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:41.565 02:55:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:41.565 02:55:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:08:41.824 02:55:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:41.824 02:55:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:41.824 02:55:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:41.824 02:55:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:08:41.824 02:55:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:41.824 02:55:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:41.824 02:55:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:41.824 02:55:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:08:41.824 02:55:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:41.824 02:55:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:41.824 02:55:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:41.824 02:55:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:41.824 1+0 records in 00:08:41.824 1+0 records out 00:08:41.824 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000483149 s, 8.5 MB/s 00:08:41.824 02:55:27 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:41.824 02:55:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:41.824 02:55:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:41.824 02:55:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:41.824 02:55:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:41.824 02:55:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:41.824 02:55:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:41.824 02:55:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:08:42.111 02:55:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:42.111 02:55:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:42.111 02:55:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:42.111 02:55:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:08:42.111 02:55:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:42.111 02:55:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:42.111 02:55:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:42.111 02:55:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:08:42.111 02:55:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:42.111 02:55:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:42.111 02:55:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:42.111 02:55:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:42.111 1+0 records in 00:08:42.111 1+0 records out 00:08:42.111 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00103306 s, 4.0 MB/s 00:08:42.111 02:55:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:42.111 02:55:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:42.111 02:55:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:42.111 02:55:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:42.111 02:55:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:42.111 02:55:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:42.111 02:55:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:42.111 02:55:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:08:42.370 02:55:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:42.370 02:55:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:42.370 02:55:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:08:42.370 02:55:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd2 00:08:42.370 02:55:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:42.370 02:55:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:42.370 02:55:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:42.370 02:55:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd2 /proc/partitions 00:08:42.370 02:55:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:42.370 02:55:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:42.370 02:55:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:42.370 02:55:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:42.370 1+0 records in 00:08:42.370 1+0 records out 00:08:42.370 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000536411 s, 7.6 MB/s 00:08:42.370 02:55:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:42.370 02:55:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:42.370 02:55:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:42.370 02:55:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:42.370 02:55:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:42.370 02:55:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:42.370 02:55:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:42.370 02:55:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:42.629 02:55:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:42.629 02:55:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:42.629 02:55:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:42.629 02:55:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd3 00:08:42.629 02:55:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:42.629 02:55:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:42.629 02:55:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:42.629 02:55:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd3 /proc/partitions 00:08:42.629 02:55:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:42.630 02:55:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:42.630 02:55:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:42.630 02:55:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:42.630 1+0 records in 00:08:42.630 1+0 records out 00:08:42.630 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000681286 s, 6.0 MB/s 00:08:42.630 02:55:28 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:42.630 02:55:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:42.630 02:55:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:42.630 02:55:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:42.630 02:55:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:42.630 02:55:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:42.630 02:55:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:42.630 02:55:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:42.888 02:55:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:42.888 02:55:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:42.888 02:55:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:42.888 02:55:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd4 00:08:42.888 02:55:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:42.888 02:55:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:42.888 02:55:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:42.888 02:55:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd4 /proc/partitions 00:08:42.888 02:55:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:42.888 02:55:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:42.888 02:55:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:42.888 02:55:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:42.888 1+0 records in 00:08:42.888 1+0 records out 00:08:42.889 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000551115 s, 7.4 MB/s 00:08:42.889 02:55:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:42.889 02:55:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:42.889 02:55:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:42.889 02:55:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:42.889 02:55:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:42.889 02:55:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:42.889 02:55:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:42.889 02:55:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:08:43.147 02:55:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:43.147 02:55:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:43.147 02:55:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:43.147 
02:55:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd5 00:08:43.147 02:55:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:43.147 02:55:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:43.147 02:55:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:43.147 02:55:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd5 /proc/partitions 00:08:43.147 02:55:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:43.147 02:55:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:43.147 02:55:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:43.147 02:55:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:43.147 1+0 records in 00:08:43.147 1+0 records out 00:08:43.147 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00203932 s, 2.0 MB/s 00:08:43.147 02:55:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:43.147 02:55:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:43.147 02:55:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:43.147 02:55:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:43.147 02:55:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:43.147 02:55:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:43.147 02:55:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:43.147 02:55:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:43.405 02:55:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:08:43.405 02:55:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:08:43.405 02:55:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:08:43.405 02:55:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd6 00:08:43.405 02:55:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:43.405 02:55:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:43.405 02:55:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:43.405 02:55:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd6 /proc/partitions 00:08:43.405 02:55:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:43.406 02:55:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:43.406 02:55:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:43.406 02:55:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:43.406 1+0 records in 00:08:43.406 1+0 records out 00:08:43.406 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000715926 s, 5.7 MB/s 00:08:43.406 02:55:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:43.406 02:55:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:43.406 02:55:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:43.406 02:55:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:43.406 02:55:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:43.406 02:55:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:43.406 02:55:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:43.406 02:55:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:43.664 02:55:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:43.664 { 00:08:43.664 "nbd_device": "/dev/nbd0", 00:08:43.664 "bdev_name": "Nvme0n1p1" 00:08:43.664 }, 00:08:43.664 { 00:08:43.664 "nbd_device": "/dev/nbd1", 00:08:43.664 "bdev_name": "Nvme0n1p2" 00:08:43.664 }, 00:08:43.664 { 00:08:43.664 "nbd_device": "/dev/nbd2", 00:08:43.664 "bdev_name": "Nvme1n1" 00:08:43.664 }, 00:08:43.664 { 00:08:43.664 "nbd_device": "/dev/nbd3", 00:08:43.664 "bdev_name": "Nvme2n1" 00:08:43.664 }, 00:08:43.664 { 00:08:43.664 "nbd_device": "/dev/nbd4", 00:08:43.664 "bdev_name": "Nvme2n2" 00:08:43.664 }, 00:08:43.664 { 00:08:43.664 "nbd_device": "/dev/nbd5", 00:08:43.664 "bdev_name": "Nvme2n3" 00:08:43.664 }, 00:08:43.664 { 00:08:43.664 "nbd_device": "/dev/nbd6", 00:08:43.664 "bdev_name": "Nvme3n1" 00:08:43.664 } 00:08:43.664 ]' 00:08:43.664 02:55:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:43.664 02:55:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:43.664 { 00:08:43.664 "nbd_device": "/dev/nbd0", 00:08:43.664 "bdev_name": "Nvme0n1p1" 00:08:43.664 }, 00:08:43.664 { 00:08:43.664 "nbd_device": "/dev/nbd1", 00:08:43.664 "bdev_name": "Nvme0n1p2" 00:08:43.664 }, 00:08:43.664 { 00:08:43.664 "nbd_device": "/dev/nbd2", 00:08:43.664 "bdev_name": "Nvme1n1" 00:08:43.664 }, 00:08:43.664 { 00:08:43.664 "nbd_device": "/dev/nbd3", 00:08:43.664 "bdev_name": "Nvme2n1" 00:08:43.664 }, 00:08:43.664 { 00:08:43.664 "nbd_device": "/dev/nbd4", 00:08:43.664 "bdev_name": "Nvme2n2" 00:08:43.664 }, 00:08:43.664 { 00:08:43.664 "nbd_device": "/dev/nbd5", 00:08:43.664 "bdev_name": "Nvme2n3" 00:08:43.664 }, 00:08:43.664 { 00:08:43.664 "nbd_device": "/dev/nbd6", 00:08:43.664 "bdev_name": "Nvme3n1" 00:08:43.664 } 00:08:43.664 ]' 00:08:43.664 02:55:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:43.664 02:55:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:08:43.664 02:55:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:43.664 02:55:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:08:43.664 02:55:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:43.664 02:55:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:43.664 02:55:29 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:43.664 02:55:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:43.923 02:55:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:43.923 02:55:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:43.923 02:55:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:43.923 02:55:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:43.923 02:55:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:43.923 02:55:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:43.923 02:55:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:43.923 02:55:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:43.923 02:55:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:43.923 02:55:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:44.181 02:55:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:44.181 02:55:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:44.181 02:55:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:44.181 02:55:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:44.181 02:55:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:44.181 02:55:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:44.181 02:55:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:44.181 02:55:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:44.181 02:55:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:44.181 02:55:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:44.439 02:55:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:44.439 02:55:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:44.439 02:55:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:44.439 02:55:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:44.439 02:55:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:44.439 02:55:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:44.439 02:55:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:44.439 02:55:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:44.439 02:55:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:44.439 02:55:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:44.697 02:55:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:44.697 02:55:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit 
nbd3 00:08:44.697 02:55:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:44.697 02:55:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:44.697 02:55:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:44.697 02:55:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:44.697 02:55:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:44.697 02:55:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:44.697 02:55:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:44.697 02:55:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:44.955 02:55:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:44.955 02:55:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:44.955 02:55:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:44.955 02:55:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:44.955 02:55:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:44.955 02:55:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:44.955 02:55:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:44.955 02:55:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:44.955 02:55:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:44.955 02:55:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:45.213 02:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:45.213 02:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:45.213 02:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:45.213 02:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:45.213 02:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:45.213 02:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:45.213 02:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:45.213 02:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:45.213 02:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:45.213 02:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:08:45.471 02:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:08:45.471 02:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:08:45.471 02:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:08:45.471 02:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:45.471 02:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:45.471 02:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:08:45.471 02:55:31 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:45.471 02:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:45.471 02:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:45.471 02:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:45.471 02:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:45.730 02:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:45.730 02:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:45.730 02:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:45.730 02:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:45.730 02:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:45.730 02:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:45.730 02:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:45.730 02:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:45.730 02:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:45.730 02:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:08:45.730 02:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:45.730 02:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:08:45.730 02:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:45.730 02:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:45.730 02:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:45.730 02:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:45.730 02:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:45.730 02:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:45.730 02:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:45.730 02:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:45.730 02:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:45.730 02:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:45.730 02:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:45.730 02:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:45.730 02:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # 
local i 00:08:45.730 02:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:45.730 02:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:45.730 02:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:08:45.989 /dev/nbd0 00:08:45.989 02:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:45.989 02:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:45.989 02:55:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:08:45.989 02:55:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:45.989 02:55:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:45.989 02:55:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:45.989 02:55:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:08:45.989 02:55:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:45.989 02:55:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:45.989 02:55:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:45.989 02:55:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:45.989 1+0 records in 00:08:45.989 1+0 records out 00:08:45.989 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000550041 s, 7.4 MB/s 00:08:45.989 02:55:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:45.989 02:55:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:45.989 02:55:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:45.989 02:55:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:45.989 02:55:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:45.989 02:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:45.989 02:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:45.989 02:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:08:46.248 /dev/nbd1 00:08:46.248 02:55:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:46.248 02:55:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:46.248 02:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:08:46.248 02:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:46.248 02:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:46.248 02:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:46.248 02:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:08:46.248 02:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:46.248 02:55:32 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:46.248 02:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:46.248 02:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:46.248 1+0 records in 00:08:46.248 1+0 records out 00:08:46.248 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00040689 s, 10.1 MB/s 00:08:46.248 02:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:46.248 02:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:46.248 02:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:46.248 02:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:46.248 02:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:46.248 02:55:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:46.248 02:55:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:46.248 02:55:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd10 00:08:46.507 /dev/nbd10 00:08:46.507 02:55:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:46.507 02:55:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:46.507 02:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd10 00:08:46.507 02:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:46.507 02:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:46.507 02:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:46.507 02:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd10 /proc/partitions 00:08:46.507 02:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:46.507 02:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:46.507 02:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:46.507 02:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:46.507 1+0 records in 00:08:46.507 1+0 records out 00:08:46.507 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000436221 s, 9.4 MB/s 00:08:46.507 02:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:46.507 02:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:46.507 02:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:46.507 02:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:46.507 02:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:46.507 02:55:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:46.507 02:55:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:46.507 02:55:32 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:08:46.765 /dev/nbd11 00:08:46.765 02:55:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:46.765 02:55:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:46.765 02:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd11 00:08:46.765 02:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:46.765 02:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:46.765 02:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:46.765 02:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd11 /proc/partitions 00:08:46.765 02:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:46.765 02:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:46.765 02:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:46.765 02:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:46.765 1+0 records in 00:08:46.765 1+0 records out 00:08:46.765 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000485948 s, 8.4 MB/s 00:08:46.765 02:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:46.765 02:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:46.765 02:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:46.766 02:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:46.766 02:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:46.766 02:55:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:46.766 02:55:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:46.766 02:55:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:08:47.024 /dev/nbd12 00:08:47.024 02:55:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:47.024 02:55:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:47.024 02:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd12 00:08:47.024 02:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:47.024 02:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:47.024 02:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:47.024 02:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd12 /proc/partitions 00:08:47.024 02:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:47.024 02:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:47.024 02:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:47.024 02:55:32 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@881 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:47.024 1+0 records in 00:08:47.024 1+0 records out 00:08:47.024 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000694661 s, 5.9 MB/s 00:08:47.024 02:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:47.024 02:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:47.024 02:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:47.024 02:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:47.024 02:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:47.024 02:55:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:47.024 02:55:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:47.024 02:55:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:08:47.283 /dev/nbd13 00:08:47.283 02:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:47.283 02:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:47.283 02:55:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd13 00:08:47.283 02:55:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:47.283 02:55:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:47.283 02:55:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:47.283 02:55:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd13 /proc/partitions 00:08:47.283 02:55:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:47.283 02:55:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:47.283 02:55:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:47.283 02:55:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:47.283 1+0 records in 00:08:47.283 1+0 records out 00:08:47.283 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00062686 s, 6.5 MB/s 00:08:47.283 02:55:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:47.283 02:55:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:47.283 02:55:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:47.283 02:55:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:47.283 02:55:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:47.283 02:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:47.283 02:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:47.283 02:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:08:47.542 /dev/nbd14 00:08:47.542 02:55:33 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:08:47.542 02:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:08:47.542 02:55:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd14 00:08:47.542 02:55:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:47.542 02:55:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:47.542 02:55:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:47.542 02:55:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd14 /proc/partitions 00:08:47.542 02:55:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:47.542 02:55:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:47.542 02:55:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:47.542 02:55:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:47.542 1+0 records in 00:08:47.542 1+0 records out 00:08:47.542 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000704854 s, 5.8 MB/s 00:08:47.542 02:55:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:47.542 02:55:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:47.542 02:55:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:47.542 02:55:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:47.542 02:55:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:47.542 02:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:47.542 02:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:47.542 02:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:47.542 02:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:47.542 02:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:47.800 02:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:47.800 { 00:08:47.800 "nbd_device": "/dev/nbd0", 00:08:47.800 "bdev_name": "Nvme0n1p1" 00:08:47.800 }, 00:08:47.800 { 00:08:47.800 "nbd_device": "/dev/nbd1", 00:08:47.800 "bdev_name": "Nvme0n1p2" 00:08:47.800 }, 00:08:47.800 { 00:08:47.800 "nbd_device": "/dev/nbd10", 00:08:47.800 "bdev_name": "Nvme1n1" 00:08:47.800 }, 00:08:47.800 { 00:08:47.800 "nbd_device": "/dev/nbd11", 00:08:47.800 "bdev_name": "Nvme2n1" 00:08:47.800 }, 00:08:47.800 { 00:08:47.800 "nbd_device": "/dev/nbd12", 00:08:47.800 "bdev_name": "Nvme2n2" 00:08:47.800 }, 00:08:47.800 { 00:08:47.800 "nbd_device": "/dev/nbd13", 00:08:47.800 "bdev_name": "Nvme2n3" 00:08:47.800 }, 00:08:47.800 { 00:08:47.800 "nbd_device": "/dev/nbd14", 00:08:47.800 "bdev_name": "Nvme3n1" 00:08:47.800 } 00:08:47.800 ]' 00:08:47.800 02:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:47.800 02:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:47.800 { 
00:08:47.800 "nbd_device": "/dev/nbd0", 00:08:47.800 "bdev_name": "Nvme0n1p1" 00:08:47.800 }, 00:08:47.800 { 00:08:47.800 "nbd_device": "/dev/nbd1", 00:08:47.800 "bdev_name": "Nvme0n1p2" 00:08:47.800 }, 00:08:47.800 { 00:08:47.800 "nbd_device": "/dev/nbd10", 00:08:47.800 "bdev_name": "Nvme1n1" 00:08:47.800 }, 00:08:47.800 { 00:08:47.801 "nbd_device": "/dev/nbd11", 00:08:47.801 "bdev_name": "Nvme2n1" 00:08:47.801 }, 00:08:47.801 { 00:08:47.801 "nbd_device": "/dev/nbd12", 00:08:47.801 "bdev_name": "Nvme2n2" 00:08:47.801 }, 00:08:47.801 { 00:08:47.801 "nbd_device": "/dev/nbd13", 00:08:47.801 "bdev_name": "Nvme2n3" 00:08:47.801 }, 00:08:47.801 { 00:08:47.801 "nbd_device": "/dev/nbd14", 00:08:47.801 "bdev_name": "Nvme3n1" 00:08:47.801 } 00:08:47.801 ]' 00:08:48.058 02:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:48.058 /dev/nbd1 00:08:48.058 /dev/nbd10 00:08:48.058 /dev/nbd11 00:08:48.058 /dev/nbd12 00:08:48.058 /dev/nbd13 00:08:48.058 /dev/nbd14' 00:08:48.058 02:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:48.058 /dev/nbd1 00:08:48.058 /dev/nbd10 00:08:48.058 /dev/nbd11 00:08:48.058 /dev/nbd12 00:08:48.058 /dev/nbd13 00:08:48.058 /dev/nbd14' 00:08:48.058 02:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:48.058 02:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:08:48.058 02:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:08:48.058 02:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:08:48.058 02:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:08:48.058 02:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:08:48.058 02:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:48.058 02:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:48.058 02:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:48.058 02:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:48.058 02:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:48.058 02:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:48.058 256+0 records in 00:08:48.058 256+0 records out 00:08:48.058 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00804169 s, 130 MB/s 00:08:48.058 02:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:48.058 02:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:48.058 256+0 records in 00:08:48.058 256+0 records out 00:08:48.058 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.172615 s, 6.1 MB/s 00:08:48.058 02:55:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:48.058 02:55:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:48.315 256+0 records in 00:08:48.315 
256+0 records out 00:08:48.315 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.175473 s, 6.0 MB/s 00:08:48.315 02:55:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:48.315 02:55:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:48.572 256+0 records in 00:08:48.572 256+0 records out 00:08:48.572 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.167302 s, 6.3 MB/s 00:08:48.572 02:55:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:48.572 02:55:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:48.572 256+0 records in 00:08:48.572 256+0 records out 00:08:48.572 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.148144 s, 7.1 MB/s 00:08:48.573 02:55:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:48.573 02:55:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:48.830 256+0 records in 00:08:48.830 256+0 records out 00:08:48.830 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.183401 s, 5.7 MB/s 00:08:48.830 02:55:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:48.830 02:55:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:49.088 256+0 records in 00:08:49.088 256+0 records out 00:08:49.088 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.1731 s, 6.1 MB/s 00:08:49.088 02:55:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:49.088 02:55:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:08:49.088 256+0 records in 00:08:49.088 256+0 records out 00:08:49.088 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.196269 s, 5.3 MB/s 00:08:49.088 02:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:08:49.088 02:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:49.088 02:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:49.088 02:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:49.088 02:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:49.088 02:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:49.089 02:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:49.089 02:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:49.089 02:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:49.347 02:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:49.347 02:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 
1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:49.347 02:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:49.347 02:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:49.347 02:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:49.347 02:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:49.347 02:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:49.347 02:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:49.347 02:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:49.347 02:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:49.347 02:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:49.347 02:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:08:49.347 02:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:49.347 02:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:49.347 02:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:49.347 02:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:49.347 02:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:49.347 02:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:49.347 02:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:49.347 02:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:49.606 02:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:49.606 02:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:49.606 02:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:49.606 02:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:49.606 02:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:49.606 02:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:49.606 02:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:49.606 02:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:49.606 02:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:49.606 02:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:49.864 02:55:35 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:49.864 02:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:49.864 02:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:49.864 02:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:49.864 02:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:49.864 02:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:49.864 02:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:49.864 02:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:49.864 02:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:49.864 02:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:50.123 02:55:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:50.123 02:55:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:50.123 02:55:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:50.123 02:55:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:50.123 02:55:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:50.123 02:55:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:50.123 02:55:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:50.123 02:55:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:50.123 02:55:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:50.123 02:55:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:50.382 02:55:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:50.382 02:55:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:50.382 02:55:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:50.382 02:55:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:50.382 02:55:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:50.382 02:55:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:50.382 02:55:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:50.382 02:55:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:50.382 02:55:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:50.382 02:55:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:50.640 02:55:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:08:50.640 02:55:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:50.640 02:55:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:50.640 02:55:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:50.640 02:55:36 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:50.640 02:55:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:50.640 02:55:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:50.640 02:55:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:50.640 02:55:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:50.640 02:55:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:50.918 02:55:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:50.918 02:55:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:50.918 02:55:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:50.918 02:55:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:50.918 02:55:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:50.918 02:55:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:50.918 02:55:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:50.918 02:55:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:50.918 02:55:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:50.918 02:55:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:08:51.177 02:55:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:08:51.177 02:55:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:08:51.177 02:55:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:08:51.177 02:55:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:51.177 02:55:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:51.177 02:55:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:08:51.177 02:55:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:51.177 02:55:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:51.177 02:55:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:51.177 02:55:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:51.177 02:55:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:51.434 02:55:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:51.434 02:55:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:51.434 02:55:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:51.434 02:55:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:51.434 02:55:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:51.693 02:55:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:51.693 02:55:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:51.693 02:55:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 
-- # count=0 00:08:51.693 02:55:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:51.693 02:55:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:08:51.693 02:55:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:51.693 02:55:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:08:51.693 02:55:37 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:51.693 02:55:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:51.693 02:55:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:51.693 02:55:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:08:51.693 02:55:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:08:51.693 02:55:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:51.951 malloc_lvol_verify 00:08:51.951 02:55:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:52.209 db1c47df-27b3-4160-bd22-cd7452848697 00:08:52.209 02:55:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:52.468 f3ca5eb7-e691-4f80-b30b-5ea6c871b086 00:08:52.468 02:55:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:52.727 /dev/nbd0 00:08:52.727 02:55:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:08:52.727 mke2fs 1.46.5 (30-Dec-2021) 00:08:52.727 Discarding device blocks: 0/4096 done 00:08:52.727 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:52.727 00:08:52.727 Allocating group tables: 0/1 done 00:08:52.727 Writing inode tables: 0/1 done 00:08:52.727 Creating journal (1024 blocks): done 00:08:52.727 Writing superblocks and filesystem accounting information: 0/1 done 00:08:52.727 00:08:52.727 02:55:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:08:52.727 02:55:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:52.727 02:55:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:52.727 02:55:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:52.727 02:55:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:52.727 02:55:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:52.727 02:55:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:52.727 02:55:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:52.999 02:55:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:52.999 02:55:38 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:52.999 02:55:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:52.999 02:55:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:53.000 02:55:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:53.000 02:55:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:53.000 02:55:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:53.000 02:55:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:53.000 02:55:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:08:53.000 02:55:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:08:53.000 02:55:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 79957 00:08:53.000 02:55:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@946 -- # '[' -z 79957 ']' 00:08:53.000 02:55:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@950 -- # kill -0 79957 00:08:53.000 02:55:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@951 -- # uname 00:08:53.000 02:55:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:53.000 02:55:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 79957 00:08:53.000 killing process with pid 79957 00:08:53.000 02:55:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:53.000 02:55:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:53.000 02:55:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # echo 'killing process with pid 79957' 00:08:53.000 02:55:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@965 -- # kill 79957 00:08:53.000 02:55:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@970 -- # wait 79957 00:08:53.269 ************************************ 00:08:53.269 END TEST bdev_nbd 00:08:53.269 ************************************ 00:08:53.269 02:55:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:08:53.269 00:08:53.269 real 0m12.800s 00:08:53.269 user 0m18.409s 00:08:53.269 sys 0m4.501s 00:08:53.269 02:55:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:53.269 02:55:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:53.269 02:55:39 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:08:53.269 02:55:39 blockdev_nvme_gpt -- bdev/blockdev.sh@764 -- # '[' gpt = nvme ']' 00:08:53.269 02:55:39 blockdev_nvme_gpt -- bdev/blockdev.sh@764 -- # '[' gpt = gpt ']' 00:08:53.269 skipping fio tests on NVMe due to multi-ns failures. 00:08:53.269 02:55:39 blockdev_nvme_gpt -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
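For reference, the lvol verification round-trip traced just above reduces to the following RPC sequence (a sketch collected from the trace; the socket path, bdev names and sizes are exactly the ones the test used):

    # Sketch: the same sequence the test drives through nbd_common.sh, spelled out.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MB malloc bdev, 512-byte blocks
    $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore on top of the malloc bdev
    $rpc bdev_lvol_create lvol 4 -l lvs                    # 4 MB logical volume in that lvstore
    $rpc nbd_start_disk lvs/lvol /dev/nbd0                 # export the lvol as /dev/nbd0
    mkfs.ext4 /dev/nbd0                                    # prove the block device takes real I/O
    $rpc nbd_stop_disk /dev/nbd0                           # detach the nbd device again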
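The repeated nbd_common.sh@35-45 fragments in the teardown above are the device-exit poll. A minimal reconstruction of that helper, with the loop bounds and checks taken from the trace and the 0.1 s retry interval assumed (the interval itself is not visible here), looks like:

    # Sketch of waitfornbd_exit: wait for the nbd device to drop out of /proc/partitions
    # after nbd_stop_disk, giving up after 20 attempts.
    waitfornbd_exit() {
        local nbd_name=$1
        for ((i = 1; i <= 20; i++)); do
            if grep -q -w "$nbd_name" /proc/partitions; then
                sleep 0.1     # still present: give the kernel time to finish teardown
            else
                break         # gone from /proc/partitions: teardown finished
            fi
        done
        return 0
    }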
00:08:53.269 02:55:39 blockdev_nvme_gpt -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:53.269 02:55:39 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:53.269 02:55:39 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:08:53.269 02:55:39 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:53.269 02:55:39 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:53.269 ************************************ 00:08:53.269 START TEST bdev_verify 00:08:53.269 ************************************ 00:08:53.269 02:55:39 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:53.269 [2024-05-14 02:55:39.284529] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:08:53.269 [2024-05-14 02:55:39.284727] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80393 ] 00:08:53.529 [2024-05-14 02:55:39.430085] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:53.529 [2024-05-14 02:55:39.451320] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:53.529 [2024-05-14 02:55:39.501094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.529 [2024-05-14 02:55:39.501168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:54.096 Running I/O for 5 seconds... 
00:08:59.384
00:08:59.384 Latency(us)
00:08:59.384 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:59.384 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:59.384 Verification LBA range: start 0x0 length 0x5e800
00:08:59.384 Nvme0n1p1 : 5.08 1335.68 5.22 0.00 0.00 95618.72 17754.30 90082.21
00:08:59.384 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:59.384 Verification LBA range: start 0x5e800 length 0x5e800
00:08:59.384 Nvme0n1p1 : 5.08 1221.48 4.77 0.00 0.00 103905.09 10545.34 95801.72
00:08:59.384 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:59.384 Verification LBA range: start 0x0 length 0x5e7ff
00:08:59.384 Nvme0n1p2 : 5.08 1335.11 5.22 0.00 0.00 95438.00 18111.77 79596.45
00:08:59.384 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:59.384 Verification LBA range: start 0x5e7ff length 0x5e7ff
00:08:59.384 Nvme0n1p2 : 5.09 1231.34 4.81 0.00 0.00 103159.41 7566.43 95801.72
00:08:59.384 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:59.384 Verification LBA range: start 0x0 length 0xa0000
00:08:59.384 Nvme1n1 : 5.08 1334.63 5.21 0.00 0.00 95286.59 18111.77 74830.20
00:08:59.384 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:59.384 Verification LBA range: start 0xa0000 length 0xa0000
00:08:59.384 Nvme1n1 : 5.09 1231.03 4.81 0.00 0.00 102964.10 7685.59 99614.72
00:08:59.384 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:59.384 Verification LBA range: start 0x0 length 0x80000
00:08:59.384 Nvme2n1 : 5.08 1334.12 5.21 0.00 0.00 95105.15 18350.08 72447.07
00:08:59.384 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:59.384 Verification LBA range: start 0x80000 length 0x80000
00:08:59.384 Nvme2n1 : 5.10 1230.71 4.81 0.00 0.00 102759.39 7685.59 104380.97
00:08:59.384 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:59.384 Verification LBA range: start 0x0 length 0x80000
00:08:59.384 Nvme2n2 : 5.09 1333.69 5.21 0.00 0.00 94921.84 17754.30 73400.32
00:08:59.384 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:59.384 Verification LBA range: start 0x80000 length 0x80000
00:08:59.384 Nvme2n2 : 5.10 1230.37 4.81 0.00 0.00 102565.54 7804.74 109147.23
00:08:59.384 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:59.384 Verification LBA range: start 0x0 length 0x80000
00:08:59.384 Nvme2n3 : 5.09 1333.27 5.21 0.00 0.00 94745.41 17515.99 76736.70
00:08:59.384 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:59.384 Verification LBA range: start 0x80000 length 0x80000
00:08:59.384 Nvme2n3 : 5.08 1222.41 4.78 0.00 0.00 104219.28 10366.60 108193.98
00:08:59.384 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:59.384 Verification LBA range: start 0x0 length 0x20000
00:08:59.384 Nvme3n1 : 5.09 1332.82 5.21 0.00 0.00 94570.37 13345.51 81026.33
00:08:59.384 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:59.384 Verification LBA range: start 0x20000 length 0x20000
00:08:59.384 Nvme3n1 : 5.08 1221.98 4.77 0.00 0.00 104091.34 10902.81 101521.22
00:08:59.384 ===================================================================================================================
00:08:59.384 Total : 17928.66 70.03 0.00 0.00 99066.38 7566.43 109147.23
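The table above holds two verify jobs per bdev, one for each core in the 0x3 mask, with the Average/min/max columns in microseconds. To re-run the same pass outside the harness, the bdevperf invocation from the trace can be used directly (paths as in this workspace; the flag glosses in the comment are our reading of bdevperf usage, not taken from this log):

    # -q 128: queue depth, -o 4096: I/O size in bytes, -w verify: write/read-back-verify
    # workload, -t 5: run time in seconds, -C: let every core in the mask drive every
    # bdev (hence two jobs per bdev here), -m 0x3: reactors on cores 0 and 1.
    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    "$SPDK_DIR/build/examples/bdevperf" --json "$SPDK_DIR/test/bdev/bdev.json" \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3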
00:08:59.384 00:08:59.384 real 0m6.200s 00:08:59.384 user 0m11.562s 00:08:59.384 sys 0m0.229s 00:08:59.384 02:55:45 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:59.384 02:55:45 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:08:59.384 ************************************ 00:08:59.384 END TEST bdev_verify 00:08:59.384 ************************************ 00:08:59.643 02:55:45 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:59.643 02:55:45 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:08:59.643 02:55:45 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:59.643 02:55:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:59.643 ************************************ 00:08:59.643 START TEST bdev_verify_big_io 00:08:59.643 ************************************ 00:08:59.643 02:55:45 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:59.643 [2024-05-14 02:55:45.527661] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:08:59.643 [2024-05-14 02:55:45.527804] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80475 ] 00:08:59.643 [2024-05-14 02:55:45.664808] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:59.902 [2024-05-14 02:55:45.682960] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:59.902 [2024-05-14 02:55:45.716659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.902 [2024-05-14 02:55:45.716684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:00.160 Running I/O for 5 seconds... 
00:09:06.726 00:09:06.726 Latency(us) 00:09:06.726 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:06.726 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:06.726 Verification LBA range: start 0x0 length 0x5e80 00:09:06.726 Nvme0n1p1 : 5.96 102.45 6.40 0.00 0.00 1182517.21 21209.83 1243039.19 00:09:06.726 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:06.726 Verification LBA range: start 0x5e80 length 0x5e80 00:09:06.726 Nvme0n1p1 : 5.81 104.66 6.54 0.00 0.00 1152248.21 27167.65 1220161.16 00:09:06.726 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:06.726 Verification LBA range: start 0x0 length 0x5e7f 00:09:06.726 Nvme0n1p2 : 5.84 100.16 6.26 0.00 0.00 1190531.36 107717.35 1189657.13 00:09:06.726 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:06.726 Verification LBA range: start 0x5e7f length 0x5e7f 00:09:06.726 Nvme0n1p2 : 5.95 91.46 5.72 0.00 0.00 1285850.05 77689.95 1952257.86 00:09:06.726 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:06.726 Verification LBA range: start 0x0 length 0xa000 00:09:06.726 Nvme1n1 : 5.97 98.88 6.18 0.00 0.00 1176965.97 122016.12 1868371.78 00:09:06.726 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:06.726 Verification LBA range: start 0xa000 length 0xa000 00:09:06.726 Nvme1n1 : 5.81 108.88 6.81 0.00 0.00 1062173.76 127735.62 1304047.24 00:09:06.726 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:06.726 Verification LBA range: start 0x0 length 0x8000 00:09:06.726 Nvme2n1 : 5.97 98.51 6.16 0.00 0.00 1140231.56 122969.37 1891249.80 00:09:06.726 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:06.726 Verification LBA range: start 0x8000 length 0x8000 00:09:06.726 Nvme2n1 : 5.95 112.43 7.03 0.00 0.00 999214.99 102951.10 1098145.05 00:09:06.726 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:06.726 Verification LBA range: start 0x0 length 0x8000 00:09:06.726 Nvme2n2 : 6.09 107.32 6.71 0.00 0.00 1025629.11 51713.86 1914127.83 00:09:06.726 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:06.726 Verification LBA range: start 0x8000 length 0x8000 00:09:06.726 Nvme2n2 : 6.09 122.13 7.63 0.00 0.00 900647.22 37176.79 1037136.99 00:09:06.726 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:06.726 Verification LBA range: start 0x0 length 0x8000 00:09:06.726 Nvme2n3 : 6.12 112.49 7.03 0.00 0.00 948789.04 17754.30 1967509.88 00:09:06.726 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:06.726 Verification LBA range: start 0x8000 length 0x8000 00:09:06.726 Nvme2n3 : 6.10 125.94 7.87 0.00 0.00 850889.08 56480.12 1067641.02 00:09:06.726 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:06.726 Verification LBA range: start 0x0 length 0x2000 00:09:06.726 Nvme3n1 : 6.13 123.09 7.69 0.00 0.00 840603.10 3023.59 1998013.91 00:09:06.726 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:06.726 Verification LBA range: start 0x2000 length 0x2000 00:09:06.726 Nvme3n1 : 6.11 136.18 8.51 0.00 0.00 766273.53 2904.44 1090519.04 00:09:06.726 =================================================================================================================== 00:09:06.726 Total : 1544.58 96.54 0.00 0.00 
1018902.90 2904.44 1998013.91 00:09:06.986 00:09:06.986 real 0m7.420s 00:09:06.986 user 0m14.013s 00:09:06.986 sys 0m0.248s 00:09:06.986 02:55:52 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:06.986 ************************************ 00:09:06.986 02:55:52 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:09:06.986 END TEST bdev_verify_big_io 00:09:06.986 ************************************ 00:09:06.986 02:55:52 blockdev_nvme_gpt -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:06.986 02:55:52 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:09:06.986 02:55:52 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:06.986 02:55:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:06.986 ************************************ 00:09:06.986 START TEST bdev_write_zeroes 00:09:06.986 ************************************ 00:09:06.986 02:55:52 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:07.245 [2024-05-14 02:55:53.030766] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:09:07.245 [2024-05-14 02:55:53.030977] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80574 ] 00:09:07.245 [2024-05-14 02:55:53.184834] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:07.245 [2024-05-14 02:55:53.206698] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.245 [2024-05-14 02:55:53.253628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.830 Running I/O for 1 seconds... 
00:09:08.762 00:09:08.762 Latency(us) 00:09:08.762 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:08.762 Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:08.762 Nvme0n1p1 : 1.02 7222.48 28.21 0.00 0.00 17637.89 13583.83 29312.47 00:09:08.762 Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:08.762 Nvme0n1p2 : 1.02 7209.64 28.16 0.00 0.00 17636.15 14000.87 28955.00 00:09:08.762 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:08.762 Nvme1n1 : 1.02 7243.47 28.29 0.00 0.00 17518.46 10843.23 24427.05 00:09:08.762 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:08.762 Nvme2n1 : 1.03 7232.19 28.25 0.00 0.00 17490.21 11617.75 23950.43 00:09:08.762 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:08.762 Nvme2n2 : 1.03 7221.34 28.21 0.00 0.00 17451.38 10724.07 22163.08 00:09:08.762 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:08.762 Nvme2n3 : 1.03 7210.68 28.17 0.00 0.00 17430.98 9711.24 21090.68 00:09:08.762 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:08.762 Nvme3n1 : 1.03 7199.53 28.12 0.00 0.00 17417.35 9234.62 21090.68 00:09:08.762 =================================================================================================================== 00:09:08.762 Total : 50539.34 197.42 0.00 0.00 17511.46 9234.62 29312.47 00:09:09.022 00:09:09.022 real 0m2.013s 00:09:09.022 user 0m1.669s 00:09:09.022 sys 0m0.226s 00:09:09.022 02:55:54 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:09.022 02:55:54 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:09:09.022 ************************************ 00:09:09.022 END TEST bdev_write_zeroes 00:09:09.022 ************************************ 00:09:09.022 02:55:54 blockdev_nvme_gpt -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:09.022 02:55:54 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:09:09.022 02:55:54 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:09.022 02:55:54 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:09.022 ************************************ 00:09:09.022 START TEST bdev_json_nonenclosed 00:09:09.022 ************************************ 00:09:09.022 02:55:54 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:09.281 [2024-05-14 02:55:55.097164] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:09:09.281 [2024-05-14 02:55:55.097341] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80616 ] 00:09:09.281 [2024-05-14 02:55:55.248555] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
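The two bdev_json_* negative tests here (bdev_json_nonenclosed starting above, bdev_json_nonarray right after it) feed bdevperf deliberately malformed configuration files and expect json_config_prepare_ctx to reject them. The snippets below only illustrate the two failure shapes; they are not the literal contents of nonenclosed.json and nonarray.json in the repository, and the example file paths are made up:

    SPDK_DIR=/home/vagrant/spdk_repo/spdk

    # Top-level value is an array, not an object -> rejected as "not enclosed in {}".
    echo '[ { "subsystems": [] } ]' > /tmp/nonenclosed-example.json
    "$SPDK_DIR/build/examples/bdevperf" --json /tmp/nonenclosed-example.json \
        -q 128 -o 4096 -w write_zeroes -t 1

    # "subsystems" present but not an array -> rejected as "'subsystems' should be an array".
    echo '{ "subsystems": {} }' > /tmp/nonarray-example.json
    "$SPDK_DIR/build/examples/bdevperf" --json /tmp/nonarray-example.json \
        -q 128 -o 4096 -w write_zeroes -t 1

    # A well-formed config is an object whose "subsystems" member is an array, e.g.
    # { "subsystems": [ { "subsystem": "bdev", "config": [ ... ] } ] }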
00:09:09.281 [2024-05-14 02:55:55.274158] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.540 [2024-05-14 02:55:55.321062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.541 [2024-05-14 02:55:55.321261] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:09:09.541 [2024-05-14 02:55:55.321317] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:09.541 [2024-05-14 02:55:55.321376] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:09.541 00:09:09.541 real 0m0.447s 00:09:09.541 user 0m0.209s 00:09:09.541 sys 0m0.133s 00:09:09.541 02:55:55 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:09.541 02:55:55 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:09:09.541 ************************************ 00:09:09.541 END TEST bdev_json_nonenclosed 00:09:09.541 ************************************ 00:09:09.541 02:55:55 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:09.541 02:55:55 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:09:09.541 02:55:55 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:09.541 02:55:55 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:09.541 ************************************ 00:09:09.541 START TEST bdev_json_nonarray 00:09:09.541 ************************************ 00:09:09.541 02:55:55 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:09.800 [2024-05-14 02:55:55.604214] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:09:09.800 [2024-05-14 02:55:55.604477] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80641 ] 00:09:09.800 [2024-05-14 02:55:55.755641] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:09.800 [2024-05-14 02:55:55.779257] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.800 [2024-05-14 02:55:55.824995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.800 [2024-05-14 02:55:55.825183] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:09:09.800 [2024-05-14 02:55:55.825238] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:09.800 [2024-05-14 02:55:55.825271] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:10.060 00:09:10.060 real 0m0.445s 00:09:10.060 user 0m0.202s 00:09:10.060 sys 0m0.138s 00:09:10.060 ************************************ 00:09:10.060 END TEST bdev_json_nonarray 00:09:10.060 ************************************ 00:09:10.060 02:55:55 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:10.060 02:55:55 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:09:10.060 02:55:55 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # [[ gpt == bdev ]] 00:09:10.060 02:55:55 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # [[ gpt == gpt ]] 00:09:10.060 02:55:55 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:09:10.060 02:55:55 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:10.060 02:55:55 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:10.060 02:55:55 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:10.060 ************************************ 00:09:10.060 START TEST bdev_gpt_uuid 00:09:10.060 ************************************ 00:09:10.060 02:55:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1121 -- # bdev_gpt_uuid 00:09:10.060 02:55:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@614 -- # local bdev 00:09:10.060 02:55:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@616 -- # start_spdk_tgt 00:09:10.060 02:55:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=80667 00:09:10.060 02:55:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:10.060 02:55:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:10.060 02:55:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 80667 00:09:10.060 02:55:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@827 -- # '[' -z 80667 ']' 00:09:10.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:10.060 02:55:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.060 02:55:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:10.060 02:55:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.060 02:55:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:10.060 02:55:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:10.319 [2024-05-14 02:55:56.125345] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:09:10.319 [2024-05-14 02:55:56.125560] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80667 ] 00:09:10.319 [2024-05-14 02:55:56.278479] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:09:10.319 [2024-05-14 02:55:56.299901] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.319 [2024-05-14 02:55:56.345514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.255 02:55:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:11.255 02:55:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@860 -- # return 0 00:09:11.255 02:55:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:11.255 02:55:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.255 02:55:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:11.514 Some configs were skipped because the RPC state that can call them passed over. 00:09:11.514 02:55:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.514 02:55:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_wait_for_examine 00:09:11.514 02:55:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.514 02:55:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:11.514 02:55:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.514 02:55:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:09:11.514 02:55:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.514 02:55:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:11.514 02:55:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.514 02:55:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # bdev='[ 00:09:11.514 { 00:09:11.514 "name": "Nvme0n1p1", 00:09:11.514 "aliases": [ 00:09:11.514 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:09:11.514 ], 00:09:11.514 "product_name": "GPT Disk", 00:09:11.514 "block_size": 4096, 00:09:11.514 "num_blocks": 774144, 00:09:11.514 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:11.514 "md_size": 64, 00:09:11.514 "md_interleave": false, 00:09:11.514 "dif_type": 0, 00:09:11.514 "assigned_rate_limits": { 00:09:11.514 "rw_ios_per_sec": 0, 00:09:11.514 "rw_mbytes_per_sec": 0, 00:09:11.514 "r_mbytes_per_sec": 0, 00:09:11.514 "w_mbytes_per_sec": 0 00:09:11.514 }, 00:09:11.514 "claimed": false, 00:09:11.514 "zoned": false, 00:09:11.514 "supported_io_types": { 00:09:11.514 "read": true, 00:09:11.514 "write": true, 00:09:11.514 "unmap": true, 00:09:11.514 "write_zeroes": true, 00:09:11.514 "flush": true, 00:09:11.514 "reset": true, 00:09:11.514 "compare": true, 00:09:11.514 "compare_and_write": false, 00:09:11.514 "abort": true, 00:09:11.514 "nvme_admin": false, 00:09:11.514 "nvme_io": false 00:09:11.514 }, 00:09:11.514 "driver_specific": { 00:09:11.514 "gpt": { 00:09:11.514 "base_bdev": "Nvme0n1", 00:09:11.514 "offset_blocks": 256, 00:09:11.514 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:09:11.514 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:11.514 "partition_name": "SPDK_TEST_first" 00:09:11.514 } 00:09:11.514 } 00:09:11.514 } 00:09:11.514 ]' 00:09:11.514 02:55:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r length 
00:09:11.514 02:55:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 1 == \1 ]] 00:09:11.514 02:55:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].aliases[0]' 00:09:11.514 02:55:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:11.514 02:55:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@624 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:11.773 02:55:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@624 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:11.773 02:55:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:09:11.773 02:55:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.773 02:55:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:11.773 02:55:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.773 02:55:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # bdev='[ 00:09:11.773 { 00:09:11.773 "name": "Nvme0n1p2", 00:09:11.773 "aliases": [ 00:09:11.773 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:09:11.773 ], 00:09:11.773 "product_name": "GPT Disk", 00:09:11.773 "block_size": 4096, 00:09:11.773 "num_blocks": 774143, 00:09:11.773 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:09:11.773 "md_size": 64, 00:09:11.773 "md_interleave": false, 00:09:11.773 "dif_type": 0, 00:09:11.773 "assigned_rate_limits": { 00:09:11.773 "rw_ios_per_sec": 0, 00:09:11.773 "rw_mbytes_per_sec": 0, 00:09:11.773 "r_mbytes_per_sec": 0, 00:09:11.773 "w_mbytes_per_sec": 0 00:09:11.773 }, 00:09:11.773 "claimed": false, 00:09:11.773 "zoned": false, 00:09:11.773 "supported_io_types": { 00:09:11.773 "read": true, 00:09:11.773 "write": true, 00:09:11.773 "unmap": true, 00:09:11.773 "write_zeroes": true, 00:09:11.773 "flush": true, 00:09:11.773 "reset": true, 00:09:11.773 "compare": true, 00:09:11.773 "compare_and_write": false, 00:09:11.773 "abort": true, 00:09:11.773 "nvme_admin": false, 00:09:11.773 "nvme_io": false 00:09:11.773 }, 00:09:11.773 "driver_specific": { 00:09:11.773 "gpt": { 00:09:11.773 "base_bdev": "Nvme0n1", 00:09:11.773 "offset_blocks": 774400, 00:09:11.773 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:09:11.773 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:09:11.773 "partition_name": "SPDK_TEST_second" 00:09:11.773 } 00:09:11.773 } 00:09:11.773 } 00:09:11.773 ]' 00:09:11.773 02:55:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r length 00:09:11.773 02:55:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ 1 == \1 ]] 00:09:11.773 02:55:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].aliases[0]' 00:09:11.773 02:55:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:11.773 02:55:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@629 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:11.773 02:55:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@629 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == 
\a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:11.773 02:55:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@631 -- # killprocess 80667 00:09:11.773 02:55:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@946 -- # '[' -z 80667 ']' 00:09:11.773 02:55:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@950 -- # kill -0 80667 00:09:11.773 02:55:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@951 -- # uname 00:09:11.773 02:55:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:11.773 02:55:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 80667 00:09:11.773 killing process with pid 80667 00:09:11.773 02:55:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:11.773 02:55:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:11.773 02:55:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # echo 'killing process with pid 80667' 00:09:11.773 02:55:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@965 -- # kill 80667 00:09:11.773 02:55:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@970 -- # wait 80667 00:09:12.340 00:09:12.340 real 0m2.111s 00:09:12.340 user 0m2.463s 00:09:12.340 sys 0m0.436s 00:09:12.340 02:55:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:12.340 ************************************ 00:09:12.340 END TEST bdev_gpt_uuid 00:09:12.340 ************************************ 00:09:12.340 02:55:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:12.340 02:55:58 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # [[ gpt == crypto_sw ]] 00:09:12.340 02:55:58 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:09:12.340 02:55:58 blockdev_nvme_gpt -- bdev/blockdev.sh@811 -- # cleanup 00:09:12.340 02:55:58 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:09:12.340 02:55:58 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:12.340 02:55:58 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:09:12.340 02:55:58 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:09:12.340 02:55:58 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:09:12.340 02:55:58 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:12.599 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:12.857 Waiting for block devices as requested 00:09:12.857 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:12.857 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:13.116 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:13.116 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:18.382 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:18.382 02:56:04 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme1n1 ]] 00:09:18.382 02:56:04 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme1n1 00:09:18.382 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:09:18.382 /dev/nvme1n1: 8 bytes were erased at offset 0x17a179000 (gpt): 45 
46 49 20 50 41 52 54 00:09:18.382 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:09:18.382 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:09:18.382 02:56:04 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:09:18.382 00:09:18.382 real 0m51.900s 00:09:18.382 user 1m5.819s 00:09:18.382 sys 0m9.450s 00:09:18.382 ************************************ 00:09:18.383 END TEST blockdev_nvme_gpt 00:09:18.383 ************************************ 00:09:18.383 02:56:04 blockdev_nvme_gpt -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:18.383 02:56:04 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:18.383 02:56:04 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:18.383 02:56:04 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:18.383 02:56:04 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:18.383 02:56:04 -- common/autotest_common.sh@10 -- # set +x 00:09:18.641 ************************************ 00:09:18.641 START TEST nvme 00:09:18.641 ************************************ 00:09:18.641 02:56:04 nvme -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:18.641 * Looking for test storage... 00:09:18.641 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:18.641 02:56:04 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:19.208 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:19.776 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:19.776 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:19.776 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:19.776 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:19.776 02:56:05 nvme -- nvme/nvme.sh@79 -- # uname 00:09:19.776 02:56:05 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:09:19.776 02:56:05 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:09:19.776 02:56:05 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:09:19.776 02:56:05 nvme -- common/autotest_common.sh@1078 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:09:19.776 02:56:05 nvme -- common/autotest_common.sh@1064 -- # _randomize_va_space=2 00:09:19.776 02:56:05 nvme -- common/autotest_common.sh@1065 -- # echo 0 00:09:19.776 Waiting for stub to ready for secondary processes... 00:09:19.776 02:56:05 nvme -- common/autotest_common.sh@1067 -- # stubpid=81281 00:09:19.776 02:56:05 nvme -- common/autotest_common.sh@1068 -- # echo Waiting for stub to ready for secondary processes... 00:09:19.776 02:56:05 nvme -- common/autotest_common.sh@1066 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:09:19.776 02:56:05 nvme -- common/autotest_common.sh@1069 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:19.776 02:56:05 nvme -- common/autotest_common.sh@1071 -- # [[ -e /proc/81281 ]] 00:09:19.776 02:56:05 nvme -- common/autotest_common.sh@1072 -- # sleep 1s 00:09:19.776 [2024-05-14 02:56:05.796309] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 
00:09:19.776 [2024-05-14 02:56:05.796667] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:09:20.749 [2024-05-14 02:56:06.550695] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:20.749 [2024-05-14 02:56:06.574962] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:20.749 [2024-05-14 02:56:06.606936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:20.749 [2024-05-14 02:56:06.607026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:20.749 [2024-05-14 02:56:06.607086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:20.749 [2024-05-14 02:56:06.626753] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:09:20.749 [2024-05-14 02:56:06.626858] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:20.749 [2024-05-14 02:56:06.635904] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:09:20.749 [2024-05-14 02:56:06.638098] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:09:20.749 [2024-05-14 02:56:06.639408] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:20.749 [2024-05-14 02:56:06.640045] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:09:20.749 [2024-05-14 02:56:06.640404] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:09:20.749 [2024-05-14 02:56:06.641961] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:20.749 [2024-05-14 02:56:06.642868] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:09:20.749 [2024-05-14 02:56:06.643775] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:09:20.749 [2024-05-14 02:56:06.645225] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:20.749 [2024-05-14 02:56:06.645678] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:09:20.749 [2024-05-14 02:56:06.645835] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:09:20.749 [2024-05-14 02:56:06.645971] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:09:20.749 [2024-05-14 02:56:06.646225] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:09:20.749 02:56:06 nvme -- common/autotest_common.sh@1069 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:20.749 done. 00:09:20.749 02:56:06 nvme -- common/autotest_common.sh@1074 -- # echo done. 
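The autotest_common.sh fragments above implement the stub handshake: start the primary-process stub, then poll until it is ready or has died. The loop below is a paraphrase; the individual checks (-e /var/run/spdk_stub0, -e /proc/<pid>, sleep 1s) and the stub arguments are the ones traced, while the surrounding script structure is assumed:

    # Launch the DPDK primary-process stub, then wait for it to publish its ready marker.
    /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE &
    stubpid=$!
    echo "Waiting for stub to ready for secondary processes..."
    while [ ! -e /var/run/spdk_stub0 ]; do
        # Bail out if the stub exited before becoming ready.
        [ -e /proc/$stubpid ] || { echo "stub exited before becoming ready"; exit 1; }
        sleep 1s
    done
    echo done.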
00:09:20.749 02:56:06 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:20.749 02:56:06 nvme -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:09:20.749 02:56:06 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:20.749 02:56:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:20.749 ************************************ 00:09:20.749 START TEST nvme_reset 00:09:20.749 ************************************ 00:09:20.749 02:56:06 nvme.nvme_reset -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:21.008 Initializing NVMe Controllers 00:09:21.008 Skipping QEMU NVMe SSD at 0000:00:10.0 00:09:21.008 Skipping QEMU NVMe SSD at 0000:00:11.0 00:09:21.008 Skipping QEMU NVMe SSD at 0000:00:13.0 00:09:21.008 Skipping QEMU NVMe SSD at 0000:00:12.0 00:09:21.008 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:09:21.008 ************************************ 00:09:21.008 END TEST nvme_reset 00:09:21.008 ************************************ 00:09:21.008 00:09:21.008 real 0m0.249s 00:09:21.008 user 0m0.102s 00:09:21.008 sys 0m0.102s 00:09:21.008 02:56:07 nvme.nvme_reset -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:21.008 02:56:07 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:09:21.267 02:56:07 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:09:21.267 02:56:07 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:21.267 02:56:07 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:21.267 02:56:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:21.267 ************************************ 00:09:21.267 START TEST nvme_identify 00:09:21.267 ************************************ 00:09:21.267 02:56:07 nvme.nvme_identify -- common/autotest_common.sh@1121 -- # nvme_identify 00:09:21.267 02:56:07 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:09:21.267 02:56:07 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:09:21.267 02:56:07 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:09:21.267 02:56:07 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:09:21.267 02:56:07 nvme.nvme_identify -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:21.267 02:56:07 nvme.nvme_identify -- common/autotest_common.sh@1509 -- # local bdfs 00:09:21.267 02:56:07 nvme.nvme_identify -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:21.267 02:56:07 nvme.nvme_identify -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:09:21.267 02:56:07 nvme.nvme_identify -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:21.267 02:56:07 nvme.nvme_identify -- common/autotest_common.sh@1511 -- # (( 4 == 0 )) 00:09:21.267 02:56:07 nvme.nvme_identify -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:21.267 02:56:07 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:09:21.529 ===================================================== 00:09:21.529 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:21.529 ===================================================== 00:09:21.529 Controller Capabilities/Features 00:09:21.529 ================================ 00:09:21.529 Vendor ID: 1b36 
00:09:21.529 Subsystem Vendor ID: 1af4 00:09:21.529 Serial Number: 12340 00:09:21.529 Model Number: QEMU NVMe Ctrl 00:09:21.529 Firmware Version: 8.0.0 00:09:21.529 Recommended Arb Burst: 6 00:09:21.529 IEEE OUI Identifier: 00 54 52 00:09:21.529 Multi-path I/O 00:09:21.529 May have multiple subsystem ports: No 00:09:21.529 May have multiple controllers: No 00:09:21.529 Associated with SR-IOV VF: No 00:09:21.529 Max Data Transfer Size: 524288 00:09:21.529 Max Number of Namespaces: 256 00:09:21.529 Max Number of I/O Queues: 64 00:09:21.529 NVMe Specification Version (VS): 1.4 00:09:21.529 NVMe Specification Version (Identify): 1.4 00:09:21.529 Maximum Queue Entries: 2048 00:09:21.529 Contiguous Queues Required: Yes 00:09:21.529 Arbitration Mechanisms Supported 00:09:21.529 Weighted Round Robin: Not Supported 00:09:21.529 Vendor Specific: Not Supported 00:09:21.529 Reset Timeout: 7500 ms 00:09:21.529 Doorbell Stride: 4 bytes 00:09:21.529 NVM Subsystem Reset: Not Supported 00:09:21.529 Command Sets Supported 00:09:21.529 NVM Command Set: Supported 00:09:21.529 Boot Partition: Not Supported 00:09:21.529 Memory Page Size Minimum: 4096 bytes 00:09:21.529 Memory Page Size Maximum: 65536 bytes 00:09:21.529 Persistent Memory Region: Not Supported 00:09:21.529 Optional Asynchronous Events Supported 00:09:21.529 Namespace Attribute Notices: Supported 00:09:21.529 Firmware Activation Notices: Not Supported 00:09:21.529 ANA Change Notices: Not Supported 00:09:21.529 PLE Aggregate Log Change Notices: Not Supported 00:09:21.529 LBA Status Info Alert Notices: Not Supported 00:09:21.529 EGE Aggregate Log Change Notices: Not Supported 00:09:21.529 Normal NVM Subsystem Shutdown event: Not Supported 00:09:21.529 Zone Descriptor Change Notices: Not Supported 00:09:21.529 Discovery Log Change Notices: Not Supported 00:09:21.529 Controller Attributes 00:09:21.529 128-bit Host Identifier: Not Supported 00:09:21.529 Non-Operational Permissive Mode: Not Supported 00:09:21.529 NVM Sets: Not Supported 00:09:21.529 Read Recovery Levels: Not Supported 00:09:21.529 Endurance Groups: Not Supported 00:09:21.529 Predictable Latency Mode: Not Supported 00:09:21.529 Traffic Based Keep ALive: Not Supported 00:09:21.529 Namespace Granularity: Not Supported 00:09:21.529 SQ Associations: Not Supported 00:09:21.529 UUID List: Not Supported 00:09:21.529 Multi-Domain Subsystem: Not Supported 00:09:21.529 Fixed Capacity Management: Not Supported 00:09:21.529 Variable Capacity Management: Not Supported 00:09:21.529 Delete Endurance Group: Not Supported 00:09:21.529 Delete NVM Set: Not Supported 00:09:21.529 Extended LBA Formats Supported: Supported 00:09:21.529 Flexible Data Placement Supported: Not Supported 00:09:21.529 00:09:21.529 Controller Memory Buffer Support 00:09:21.529 ================================ 00:09:21.529 Supported: No 00:09:21.529 00:09:21.529 Persistent Memory Region Support 00:09:21.529 ================================ 00:09:21.529 Supported: No 00:09:21.529 00:09:21.529 Admin Command Set Attributes 00:09:21.529 ============================ 00:09:21.529 Security Send/Receive: Not Supported 00:09:21.529 Format NVM: Supported 00:09:21.529 Firmware Activate/Download: Not Supported 00:09:21.529 Namespace Management: Supported 00:09:21.529 Device Self-Test: Not Supported 00:09:21.529 Directives: Supported 00:09:21.529 NVMe-MI: Not Supported 00:09:21.529 Virtualization Management: Not Supported 00:09:21.529 Doorbell Buffer Config: Supported 00:09:21.529 Get LBA Status Capability: Not Supported 00:09:21.529 Command & 
Feature Lockdown Capability: Not Supported 00:09:21.529 Abort Command Limit: 4 00:09:21.529 Async Event Request Limit: 4 00:09:21.529 Number of Firmware Slots: N/A 00:09:21.529 Firmware Slot 1 Read-Only: N/A 00:09:21.529 Firmware Activation Without Reset: N/A 00:09:21.529 Multiple Update Detection Support: N/A 00:09:21.529 Firmware Update Granularity: No Information Provided 00:09:21.529 Per-Namespace SMART Log: Yes 00:09:21.529 Asymmetric Namespace Access Log Page: Not Supported 00:09:21.529 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:09:21.529 Command Effects Log Page: Supported 00:09:21.529 Get Log Page Extended Data: Supported 00:09:21.529 Telemetry Log Pages: Not Supported 00:09:21.529 Persistent Event Log Pages: Not Supported 00:09:21.529 Supported Log Pages Log Page: May Support 00:09:21.529 Commands Supported & Effects Log Page: Not Supported 00:09:21.529 Feature Identifiers & Effects Log Page:May Support 00:09:21.529 NVMe-MI Commands & Effects Log Page: May Support 00:09:21.529 Data Area 4 for Telemetry Log: Not Supported 00:09:21.529 Error Log Page Entries Supported: 1 00:09:21.529 Keep Alive: Not Supported 00:09:21.529 00:09:21.529 NVM Command Set Attributes 00:09:21.529 ========================== 00:09:21.529 Submission Queue Entry Size 00:09:21.529 Max: 64 00:09:21.529 Min: 64 00:09:21.529 Completion Queue Entry Size 00:09:21.529 Max: 16 00:09:21.529 Min: 16 00:09:21.529 Number of Namespaces: 256 00:09:21.529 Compare Command: Supported 00:09:21.529 Write Uncorrectable Command: Not Supported 00:09:21.529 Dataset Management Command: Supported 00:09:21.529 Write Zeroes Command: Supported 00:09:21.529 Set Features Save Field: Supported 00:09:21.529 Reservations: Not Supported 00:09:21.529 Timestamp: Supported 00:09:21.529 Copy: Supported 00:09:21.529 Volatile Write Cache: Present 00:09:21.529 Atomic Write Unit (Normal): 1 00:09:21.529 Atomic Write Unit (PFail): 1 00:09:21.529 Atomic Compare & Write Unit: 1 00:09:21.529 Fused Compare & Write: Not Supported 00:09:21.529 Scatter-Gather List 00:09:21.529 SGL Command Set: Supported 00:09:21.529 SGL Keyed: Not Supported 00:09:21.529 SGL Bit Bucket Descriptor: Not Supported 00:09:21.529 SGL Metadata Pointer: Not Supported 00:09:21.529 Oversized SGL: Not Supported 00:09:21.529 SGL Metadata Address: Not Supported 00:09:21.529 SGL Offset: Not Supported 00:09:21.529 Transport SGL Data Block: Not Supported 00:09:21.529 Replay Protected Memory Block: Not Supported 00:09:21.529 00:09:21.529 Firmware Slot Information 00:09:21.529 ========================= 00:09:21.529 Active slot: 1 00:09:21.529 Slot 1 Firmware Revision: 1.0 00:09:21.529 00:09:21.529 00:09:21.529 Commands Supported and Effects 00:09:21.529 ============================== 00:09:21.529 Admin Commands 00:09:21.530 -------------- 00:09:21.530 Delete I/O Submission Queue (00h): Supported 00:09:21.530 Create I/O Submission Queue (01h): Supported 00:09:21.530 Get Log Page (02h): Supported 00:09:21.530 Delete I/O Completion Queue (04h): Supported 00:09:21.530 Create I/O Completion Queue (05h): Supported 00:09:21.530 Identify (06h): Supported 00:09:21.530 Abort (08h): Supported 00:09:21.530 Set Features (09h): Supported 00:09:21.530 Get Features (0Ah): Supported 00:09:21.530 Asynchronous Event Request (0Ch): Supported 00:09:21.530 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:21.530 Directive Send (19h): Supported 00:09:21.530 Directive Receive (1Ah): Supported 00:09:21.530 Virtualization Management (1Ch): Supported 00:09:21.530 Doorbell Buffer Config (7Ch): Supported 
00:09:21.530 Format NVM (80h): Supported LBA-Change 00:09:21.530 I/O Commands 00:09:21.530 ------------ 00:09:21.530 Flush (00h): Supported LBA-Change 00:09:21.530 Write (01h): Supported LBA-Change 00:09:21.530 Read (02h): Supported 00:09:21.530 Compare (05h): Supported 00:09:21.530 Write Zeroes (08h): Supported LBA-Change 00:09:21.530 Dataset Management (09h): Supported LBA-Change 00:09:21.530 Unknown (0Ch): Supported 00:09:21.530 Unknown (12h): Supported 00:09:21.530 Copy (19h): Supported LBA-Change 00:09:21.530 Unknown (1Dh): Supported LBA-Change 00:09:21.530 00:09:21.530 Error Log 00:09:21.530 ========= 00:09:21.530 00:09:21.530 Arbitration 00:09:21.530 =========== 00:09:21.530 Arbitration Burst: no limit 00:09:21.530 00:09:21.530 Power Management 00:09:21.530 ================ 00:09:21.530 Number of Power States: 1 00:09:21.530 Current Power State: Power State #0 00:09:21.530 Power State #0: 00:09:21.530 Max Power: 25.00 W 00:09:21.530 Non-Operational State: Operational 00:09:21.530 Entry Latency: 16 microseconds 00:09:21.530 Exit Latency: 4 microseconds 00:09:21.530 Relative Read Throughput: 0 00:09:21.530 Relative Read Latency: 0 00:09:21.530 Relative Write Throughput: 0 00:09:21.530 Relative Write Latency: 0 00:09:21.530 Idle Power: Not Reported 00:09:21.530 Active Power: Not Reported 00:09:21.530 Non-Operational Permissive Mode: Not Supported 00:09:21.530 00:09:21.530 Health Information 00:09:21.530 ================== 00:09:21.530 Critical Warnings: 00:09:21.530 Available Spare Space: OK 00:09:21.530 Temperature: OK 00:09:21.530 Device Reliability: OK 00:09:21.530 Read Only: No 00:09:21.530 Volatile Memory Backup: OK 00:09:21.530 Current Temperature: 323 Kelvin (50 Celsius) 00:09:21.530 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:21.530 Available Spare: 0% 00:09:21.530 Available Spare Threshold: 0% 00:09:21.530 Life Percentage Used: 0% 00:09:21.530 Data Units Read: 1016 00:09:21.530 Data Units Written: 844 00:09:21.530 Host Read Commands: 48670 00:09:21.530 Host Write Commands: 47106 00:09:21.530 Controller Busy Time: 0 minutes 00:09:21.530 Power Cycles: 0 00:09:21.530 Power On Hours: 0 hours 00:09:21.530 Unsafe Shutdowns: 0 00:09:21.530 Unrecoverable Media Errors: 0 00:09:21.530 Lifetime Error Log Entries: 0 00:09:21.530 Warning Temperature Time: 0 minutes 00:09:21.530 Critical Temperature Time: 0 minutes 00:09:21.530 00:09:21.530 Number of Queues 00:09:21.530 ================ 00:09:21.530 Number of I/O Submission Queues: 64 00:09:21.530 Number of I/O Completion Queues: 64 00:09:21.530 00:09:21.530 ZNS Specific Controller Data 00:09:21.530 ============================ 00:09:21.530 Zone Append Size Limit: 0 00:09:21.530 00:09:21.530 00:09:21.530 Active Namespaces 00:09:21.530 ================= 00:09:21.530 Namespace ID:1 00:09:21.530 Error Recovery Timeout: Unlimited 00:09:21.530 Command Set Identifier: NVM (00h) 00:09:21.530 Deallocate: Supported 00:09:21.530 Deallocated/Unwritten Error: Supported 00:09:21.530 Deallocated Read Value: All 0x00 00:09:21.530 Deallocate in Write Zeroes: Not Supported 00:09:21.530 Deallocated Guard Field: 0xFFFF 00:09:21.530 Flush: Supported 00:09:21.530 Reservation: Not Supported 00:09:21.530 Metadata Transferred as: Separate Metadata Buffer 00:09:21.530 Namespace Sharing Capabilities: Private 00:09:21.530 Size (in LBAs): 1548666 (5GiB) 00:09:21.530 Capacity (in LBAs): 1548666 (5GiB) 00:09:21.530 Utilization (in LBAs): 1548666 (5GiB) 00:09:21.530 Thin Provisioning: Not Supported 00:09:21.530 Per-NS Atomic Units: No 00:09:21.530 Maximum Single 
Source Range Length: 128 00:09:21.530 Maximum Copy Length: 128 00:09:21.530 Maximum Source Range Count: 128 00:09:21.530 NGUID/EUI64 Never Reused: No 00:09:21.530 Namespace Write Protected: No 00:09:21.530 Number of LBA Formats: 8 00:09:21.530 Current LBA Format: LBA Format #07 00:09:21.530 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:21.530 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:21.530 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:21.530 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:21.530 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:21.530 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:21.530 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:21.530 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:21.530 00:09:21.530 ===================================================== 00:09:21.530 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:21.530 ===================================================== 00:09:21.530 Controller Capabilities/Features 00:09:21.530 ================================ 00:09:21.530 Vendor ID: 1b36 00:09:21.530 Subsystem Vendor ID: 1af4 00:09:21.530 Serial Number: 12341 00:09:21.530 Model Number: QEMU NVMe Ctrl 00:09:21.530 Firmware Version: 8.0.0 00:09:21.530 Recommended Arb Burst: 6 00:09:21.530 IEEE OUI Identifier: 00 54 52 00:09:21.530 Multi-path I/O 00:09:21.530 May have multiple subsystem ports: No 00:09:21.530 May have multiple controllers: No 00:09:21.530 Associated with SR-IOV VF: No 00:09:21.530 Max Data Transfer Size: 524288 00:09:21.530 Max Number of Namespaces: 256 00:09:21.530 Max Number of I/O Queues: 64 00:09:21.530 NVMe Specification Version (VS): 1.4 00:09:21.530 NVMe Specification Version (Identify): 1.4 00:09:21.530 Maximum Queue Entries: 2048 00:09:21.530 Contiguous Queues Required: Yes 00:09:21.530 Arbitration Mechanisms Supported 00:09:21.530 Weighted Round Robin: Not Supported 00:09:21.530 Vendor Specific: Not Supported 00:09:21.530 Reset Timeout: 7500 ms 00:09:21.530 Doorbell Stride: 4 bytes 00:09:21.530 NVM Subsystem Reset: Not Supported 00:09:21.530 Command Sets Supported 00:09:21.530 NVM Command Set: Supported 00:09:21.530 Boot Partition: Not Supported 00:09:21.530 Memory Page Size Minimum: 4096 bytes 00:09:21.530 Memory Page Size Maximum: 65536 bytes 00:09:21.530 Persistent Memory Region: Not Supported 00:09:21.530 Optional Asynchronous Events Supported 00:09:21.530 Namespace Attribute Notices: Supported 00:09:21.530 Firmware Activation Notices: Not Supported 00:09:21.530 ANA Change Notices: Not Supported 00:09:21.530 PLE Aggregate Log Change Notices: Not Supported 00:09:21.530 LBA Status Info Alert Notices: Not Supported 00:09:21.530 EGE Aggregate Log Change Notices: Not Supported 00:09:21.530 Normal NVM Subsystem Shutdown event: Not Supported 00:09:21.530 Zone Descriptor Change Notices: Not Supported 00:09:21.530 Discovery Log Change Notices: Not Supported 00:09:21.530 Controller Attributes 00:09:21.530 128-bit Host Identifier: Not Supported 00:09:21.530 Non-Operational Permissive Mode: Not Supported 00:09:21.530 NVM Sets: Not Supported 00:09:21.530 Read Recovery Levels: Not Supported 00:09:21.530 Endurance Groups: Not Supported 00:09:21.530 Predictable Latency Mode: Not Supported 00:09:21.530 Traffic Based Keep ALive: Not Supported 00:09:21.530 Namespace Granularity: Not Supported 00:09:21.530 SQ Associations: Not Supported 00:09:21.530 UUID List: Not Supported 00:09:21.530 Multi-Domain Subsystem: Not Supported 00:09:21.530 Fixed Capacity Management: Not Supported 
00:09:21.530 Variable Capacity Management: Not Supported 00:09:21.530 Delete Endurance Group: Not Supported 00:09:21.530 Delete NVM Set: Not Supported 00:09:21.530 Extended LBA Formats Supported: Supported 00:09:21.530 Flexible Data Placement Supported: Not Supported 00:09:21.530 00:09:21.530 Controller Memory Buffer Support 00:09:21.530 ================================ 00:09:21.530 Supported: No 00:09:21.530 00:09:21.530 Persistent Memory Region Support 00:09:21.530 ================================ 00:09:21.530 Supported: No 00:09:21.530 00:09:21.530 Admin Command Set Attributes 00:09:21.530 ============================ 00:09:21.530 Security Send/Receive: Not Supported 00:09:21.530 Format NVM: Supported 00:09:21.530 Firmware Activate/Download: Not Supported 00:09:21.530 Namespace Management: Supported 00:09:21.530 Device Self-Test: Not Supported 00:09:21.530 Directives: Supported 00:09:21.530 NVMe-MI: Not Supported 00:09:21.530 Virtualization Management: Not Supported 00:09:21.530 Doorbell Buffer Config: Supported 00:09:21.530 Get LBA Status Capability: Not Supported 00:09:21.530 Command & Feature Lockdown Capability: Not Supported 00:09:21.531 Abort Command Limit: 4 00:09:21.531 Async Event Request Limit: 4 00:09:21.531 Number of Firmware Slots: N/A 00:09:21.531 Firmware Slot 1 Read-Only: N/A 00:09:21.531 Firmware Activation Without Reset: N/A 00:09:21.531 Multiple Update Detection Support: N/A 00:09:21.531 Firmware Update Granularity: No Information Provided 00:09:21.531 Per-Namespace SMART Log: Yes 00:09:21.531 Asymmetric Namespace Access Log Page: Not Supported 00:09:21.531 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:21.531 Command Effects Log Page: Supported 00:09:21.531 Get Log Page Extended Data: Supported 00:09:21.531 Telemetry Log Pages: Not Supported 00:09:21.531 Persistent Event Log Pages: Not Supported 00:09:21.531 Supported Log Pages Log Page: May Support 00:09:21.531 Commands Supported & Effects Log Page: Not Supported 00:09:21.531 Feature Identifiers & Effects Log Page:May Support 00:09:21.531 NVMe-MI Commands & Effects Log Page: May Support 00:09:21.531 Data Area 4 for Telemetry Log: Not Supported 00:09:21.531 Error Log Page Entries Supported: 1 00:09:21.531 Keep Alive: Not Supported 00:09:21.531 00:09:21.531 NVM Command Set Attributes 00:09:21.531 ========================== 00:09:21.531 Submission Queue Entry Size 00:09:21.531 Max: 64 00:09:21.531 Min: 64 00:09:21.531 Completion Queue Entry Size 00:09:21.531 Max: 16 00:09:21.531 Min: 16 00:09:21.531 Number of Namespaces: 256 00:09:21.531 Compare Command: Supported 00:09:21.531 Write Uncorrectable Command: Not Supported 00:09:21.531 Dataset Management Command: Supported 00:09:21.531 Write Zeroes Command: Supported 00:09:21.531 Set Features Save Field: Supported 00:09:21.531 Reservations: Not Supported 00:09:21.531 Timestamp: Supported 00:09:21.531 Copy: Supported 00:09:21.531 Volatile Write Cache: Present 00:09:21.531 Atomic Write Unit (Normal): 1 00:09:21.531 Atomic Write Unit (PFail): 1 00:09:21.531 Atomic Compare & Write Unit: 1 00:09:21.531 Fused Compare & Write: Not Supported 00:09:21.531 Scatter-Gather List 00:09:21.531 SGL Command Set: Supported 00:09:21.531 SGL Keyed: Not Supported 00:09:21.531 SGL Bit Bucket Descriptor: Not Supported 00:09:21.531 SGL Metadata Pointer: Not Supported 00:09:21.531 Oversized SGL: Not Supported 00:09:21.531 SGL Metadata Address: Not Supported 00:09:21.531 SGL Offset: Not Supported 00:09:21.531 Transport SGL Data Block: Not Supported 00:09:21.531 Replay Protected Memory Block: Not 
Supported 00:09:21.531 00:09:21.531 Firmware Slot Information 00:09:21.531 ========================= 00:09:21.531 Active slot: 1 00:09:21.531 Slot 1 Firmware Revision: 1.0 00:09:21.531 00:09:21.531 00:09:21.531 Commands Supported and Effects 00:09:21.531 ============================== 00:09:21.531 Admin Commands 00:09:21.531 -------------- 00:09:21.531 Delete I/O Submission Queue (00h): Supported 00:09:21.531 Create I/O Submission Queue (01h): Supported 00:09:21.531 Get Log Page (02h): Supported 00:09:21.531 Delete I/O Completion Queue (04h): Supported 00:09:21.531 Create I/O Completion Queue (05h): Supported 00:09:21.531 Identify (06h): Supported 00:09:21.531 Abort (08h): Supported 00:09:21.531 Set Features (09h): Supported 00:09:21.531 Get Features (0Ah): Supported 00:09:21.531 Asynchronous Event Request (0Ch): Supported 00:09:21.531 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:21.531 Directive Send (19h): Supported 00:09:21.531 Directive Receive (1Ah): Supported 00:09:21.531 Virtualization Management (1Ch): Supported 00:09:21.531 Doorbell Buffer Config (7Ch): Supported 00:09:21.531 Format NVM (80h): Supported LBA-Change 00:09:21.531 I/O Commands 00:09:21.531 ------------ 00:09:21.531 Flush (00h): Supported LBA-Change 00:09:21.531 Write (01h): Supported LBA-Change 00:09:21.531 Read (02h): Supported 00:09:21.531 Compare (05h): Supported 00:09:21.531 Write Zeroes (08h): Supported LBA-Change 00:09:21.531 Dataset Management (09h): Supported LBA-Change 00:09:21.531 Unknown (0Ch): Supported 00:09:21.531 Unknown (12h): Supported 00:09:21.531 Copy (19h): Supported LBA-Change 00:09:21.531 Unknown (1Dh): Supported LBA-Change 00:09:21.531 00:09:21.531 Error Log 00:09:21.531 ========= 00:09:21.531 00:09:21.531 Arbitration 00:09:21.531 =========== 00:09:21.531 Arbitration Burst: no limit 00:09:21.531 00:09:21.531 Power Management 00:09:21.531 ================ 00:09:21.531 Number of Power States: 1 00:09:21.531 Current Power State: Power State #0 00:09:21.531 Power State #0: 00:09:21.531 Max Power: 25.00 W 00:09:21.531 Non-Operational State: Operational 00:09:21.531 Entry Latency: 16 microseconds 00:09:21.531 Exit Latency: 4 microseconds 00:09:21.531 Relative Read Throughput: 0 00:09:21.531 Relative Read Latency: 0 00:09:21.531 Relative Write Throughput: 0 00:09:21.531 Relative Write Latency: 0 00:09:21.531 Idle Power: Not Reported 00:09:21.531 Active Power: Not Reported 00:09:21.531 Non-Operational Permissive Mode: Not Supported 00:09:21.531 00:09:21.531 Health Information 00:09:21.531 ================== 00:09:21.531 Critical Warnings: 00:09:21.531 Available Spare Space: OK 00:09:21.531 Temperature: OK 00:09:21.531 Device Reliability: OK 00:09:21.531 Read Only: No 00:09:21.531 Volatile Memory Backup: OK 00:09:21.531 Current Temperature: 323 Kelvin (50 Celsius) 00:09:21.531 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:21.531 Available Spare: 0% 00:09:21.531 Available Spare Threshold: 0% 00:09:21.531 Life Percentage Used: 0% 00:09:21.531 Data Units Read: 752 00:09:21.531 Data Units Written: 603 00:09:21.531 Host Read Commands: 34765 00:09:21.531 Host Write Commands: 32543 00:09:21.531 Controller Busy Time: 0 minutes 00:09:21.531 Power Cycles: 0 00:09:21.531 Power On Hours: 0 hours 00:09:21.531 Unsafe Shutdowns: 0 00:09:21.531 Unrecoverable Media Errors: 0 00:09:21.531 Lifetime Error Log Entries: 0 00:09:21.531 Warning Temperature Time: 0 minutes 00:09:21.531 Critical Temperature Time: 0 minutes 00:09:21.531 00:09:21.531 Number of Queues 00:09:21.531 ================ 
00:09:21.531 Number of I/O Submission Queues: 64 00:09:21.531 Number of I/O Completion Queues: 64 00:09:21.531 00:09:21.531 ZNS Specific Controller Data 00:09:21.531 ============================ 00:09:21.531 Zone Append Size Limit: 0 00:09:21.531 00:09:21.531 00:09:21.531 Active Namespaces 00:09:21.531 ================= 00:09:21.531 Namespace ID:1 00:09:21.531 Error Recovery Timeout: Unlimited 00:09:21.531 Command Set Identifier: NVM (00h) 00:09:21.531 Deallocate: Supported 00:09:21.531 Deallocated/Unwritten Error: Supported 00:09:21.531 Deallocated Read Value: All 0x00 00:09:21.531 Deallocate in Write Zeroes: Not Supported 00:09:21.531 Deallocated Guard Field: 0xFFFF 00:09:21.531 Flush: Supported 00:09:21.531 Reservation: Not Supported 00:09:21.531 Namespace Sharing Capabilities: Private 00:09:21.531 Size (in LBAs): 1310720 (5GiB) 00:09:21.531 Capacity (in LBAs): 1310720 (5GiB) 00:09:21.531 Utilization (in LBAs): 1310720 (5GiB) 00:09:21.531 Thin Provisioning: Not Supported 00:09:21.531 Per-NS Atomic Units: No 00:09:21.531 Maximum Single Source Range Length: 128 00:09:21.531 Maximum Copy Length: 128 00:09:21.531 Maximum Source Range Count: 128 00:09:21.531 NGUID/EUI64 Never Reused: No 00:09:21.531 Namespace Write Protected: No 00:09:21.531 Number of LBA Formats: 8 00:09:21.531 Current LBA Format: LBA Format #04 00:09:21.531 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:21.531 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:21.531 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:21.531 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:21.531 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:21.531 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:21.531 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:21.531 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:21.531 00:09:21.531 ===================================================== 00:09:21.531 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:21.531 ===================================================== 00:09:21.531 Controller Capabilities/Features 00:09:21.531 ================================ 00:09:21.531 Vendor ID: 1b36 00:09:21.531 Subsystem Vendor ID: 1af4 00:09:21.531 Serial Number: 12343 00:09:21.531 Model Number: QEMU NVMe Ctrl 00:09:21.531 Firmware Version: 8.0.0 00:09:21.531 Recommended Arb Burst: 6 00:09:21.531 IEEE OUI Identifier: 00 54 52 00:09:21.531 Multi-path I/O 00:09:21.531 May have multiple subsystem ports: No 00:09:21.531 May have multiple controllers: Yes 00:09:21.531 Associated with SR-IOV VF: No 00:09:21.531 Max Data Transfer Size: 524288 00:09:21.531 Max Number of Namespaces: 256 00:09:21.531 Max Number of I/O Queues: 64 00:09:21.531 NVMe Specification Version (VS): 1.4 00:09:21.532 NVMe Specification Version (Identify): 1.4 00:09:21.532 Maximum Queue Entries: 2048 00:09:21.532 Contiguous Queues Required: Yes 00:09:21.532 Arbitration Mechanisms Supported 00:09:21.532 Weighted Round Robin: Not Supported 00:09:21.532 Vendor Specific: Not Supported 00:09:21.532 Reset Timeout: 7500 ms 00:09:21.532 Doorbell Stride: 4 bytes 00:09:21.532 NVM Subsystem Reset: Not Supported 00:09:21.532 Command Sets Supported 00:09:21.532 NVM Command Set: Supported 00:09:21.532 Boot Partition: Not Supported 00:09:21.532 Memory Page Size Minimum: 4096 bytes 00:09:21.532 Memory Page Size Maximum: 65536 bytes 00:09:21.532 Persistent Memory Region: Not Supported 00:09:21.532 Optional Asynchronous Events Supported 00:09:21.532 Namespace Attribute Notices: Supported 00:09:21.532 Firmware Activation 
Notices: Not Supported 00:09:21.532 ANA Change Notices: Not Supported 00:09:21.532 PLE Aggregate Log Change Notices: Not Supported 00:09:21.532 LBA Status Info Alert Notices: Not Supported 00:09:21.532 EGE Aggregate Log Change Notices: Not Supported 00:09:21.532 Normal NVM Subsystem Shutdown event: Not Supported 00:09:21.532 Zone Descriptor Change Notices: Not Supported 00:09:21.532 Discovery Log Change Notices: Not Supported 00:09:21.532 Controller Attributes 00:09:21.532 128-bit Host Identifier: Not Supported 00:09:21.532 Non-Operational Permissive Mode: Not Supported 00:09:21.532 NVM Sets: Not Supported 00:09:21.532 Read Recovery Levels: Not Supported 00:09:21.532 Endurance Groups: Supported 00:09:21.532 Predictable Latency Mode: Not Supported 00:09:21.532 Traffic Based Keep ALive: Not Supported 00:09:21.532 Namespace Granularity: Not Supported 00:09:21.532 SQ Associations: Not Supported 00:09:21.532 UUID List: Not Supported 00:09:21.532 Multi-Domain Subsystem: Not Supported 00:09:21.532 [2024-05-14 02:56:07.348799] nvme_ctrlr.c:3485:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 81302 terminated unexpected 00:09:21.532 [2024-05-14 02:56:07.350110] nvme_ctrlr.c:3485:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 81302 terminated unexpected 00:09:21.532 [2024-05-14 02:56:07.350864] nvme_ctrlr.c:3485:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 81302 terminated unexpected 00:09:21.532 Fixed Capacity Management: Not Supported 00:09:21.532 Variable Capacity Management: Not Supported 00:09:21.532 Delete Endurance Group: Not Supported 00:09:21.532 Delete NVM Set: Not Supported 00:09:21.532 Extended LBA Formats Supported: Supported 00:09:21.532 Flexible Data Placement Supported: Supported 00:09:21.532 00:09:21.532 Controller Memory Buffer Support 00:09:21.532 ================================ 00:09:21.532 Supported: No 00:09:21.532 00:09:21.532 Persistent Memory Region Support 00:09:21.532 ================================ 00:09:21.532 Supported: No 00:09:21.532 00:09:21.532 Admin Command Set Attributes 00:09:21.532 ============================ 00:09:21.532 Security Send/Receive: Not Supported 00:09:21.532 Format NVM: Supported 00:09:21.532 Firmware Activate/Download: Not Supported 00:09:21.532 Namespace Management: Supported 00:09:21.532 Device Self-Test: Not Supported 00:09:21.532 Directives: Supported 00:09:21.532 NVMe-MI: Not Supported 00:09:21.532 Virtualization Management: Not Supported 00:09:21.532 Doorbell Buffer Config: Supported 00:09:21.532 Get LBA Status Capability: Not Supported 00:09:21.532 Command & Feature Lockdown Capability: Not Supported 00:09:21.532 Abort Command Limit: 4 00:09:21.532 Async Event Request Limit: 4 00:09:21.532 Number of Firmware Slots: N/A 00:09:21.532 Firmware Slot 1 Read-Only: N/A 00:09:21.532 Firmware Activation Without Reset: N/A 00:09:21.532 Multiple Update Detection Support: N/A 00:09:21.532 Firmware Update Granularity: No Information Provided 00:09:21.532 Per-Namespace SMART Log: Yes 00:09:21.532 Asymmetric Namespace Access Log Page: Not Supported 00:09:21.532 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:21.532 Command Effects Log Page: Supported 00:09:21.532 Get Log Page Extended Data: Supported 00:09:21.532 Telemetry Log Pages: Not Supported 00:09:21.532 Persistent Event Log Pages: Not Supported 00:09:21.532 Supported Log Pages Log Page: May Support 00:09:21.532 Commands Supported & Effects Log Page: Not Supported 00:09:21.532 Feature Identifiers & Effects Log Page:May Support
00:09:21.532 NVMe-MI Commands & Effects Log Page: May Support 00:09:21.532 Data Area 4 for Telemetry Log: Not Supported 00:09:21.532 Error Log Page Entries Supported: 1 00:09:21.532 Keep Alive: Not Supported 00:09:21.532 00:09:21.532 NVM Command Set Attributes 00:09:21.532 ========================== 00:09:21.532 Submission Queue Entry Size 00:09:21.532 Max: 64 00:09:21.532 Min: 64 00:09:21.532 Completion Queue Entry Size 00:09:21.532 Max: 16 00:09:21.532 Min: 16 00:09:21.532 Number of Namespaces: 256 00:09:21.532 Compare Command: Supported 00:09:21.532 Write Uncorrectable Command: Not Supported 00:09:21.532 Dataset Management Command: Supported 00:09:21.532 Write Zeroes Command: Supported 00:09:21.532 Set Features Save Field: Supported 00:09:21.532 Reservations: Not Supported 00:09:21.532 Timestamp: Supported 00:09:21.532 Copy: Supported 00:09:21.532 Volatile Write Cache: Present 00:09:21.532 Atomic Write Unit (Normal): 1 00:09:21.532 Atomic Write Unit (PFail): 1 00:09:21.532 Atomic Compare & Write Unit: 1 00:09:21.532 Fused Compare & Write: Not Supported 00:09:21.532 Scatter-Gather List 00:09:21.532 SGL Command Set: Supported 00:09:21.532 SGL Keyed: Not Supported 00:09:21.532 SGL Bit Bucket Descriptor: Not Supported 00:09:21.532 SGL Metadata Pointer: Not Supported 00:09:21.532 Oversized SGL: Not Supported 00:09:21.532 SGL Metadata Address: Not Supported 00:09:21.532 SGL Offset: Not Supported 00:09:21.532 Transport SGL Data Block: Not Supported 00:09:21.532 Replay Protected Memory Block: Not Supported 00:09:21.532 00:09:21.532 Firmware Slot Information 00:09:21.532 ========================= 00:09:21.532 Active slot: 1 00:09:21.532 Slot 1 Firmware Revision: 1.0 00:09:21.532 00:09:21.532 00:09:21.532 Commands Supported and Effects 00:09:21.532 ============================== 00:09:21.532 Admin Commands 00:09:21.532 -------------- 00:09:21.532 Delete I/O Submission Queue (00h): Supported 00:09:21.532 Create I/O Submission Queue (01h): Supported 00:09:21.532 Get Log Page (02h): Supported 00:09:21.532 Delete I/O Completion Queue (04h): Supported 00:09:21.532 Create I/O Completion Queue (05h): Supported 00:09:21.532 Identify (06h): Supported 00:09:21.532 Abort (08h): Supported 00:09:21.532 Set Features (09h): Supported 00:09:21.532 Get Features (0Ah): Supported 00:09:21.532 Asynchronous Event Request (0Ch): Supported 00:09:21.532 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:21.532 Directive Send (19h): Supported 00:09:21.532 Directive Receive (1Ah): Supported 00:09:21.532 Virtualization Management (1Ch): Supported 00:09:21.532 Doorbell Buffer Config (7Ch): Supported 00:09:21.532 Format NVM (80h): Supported LBA-Change 00:09:21.532 I/O Commands 00:09:21.532 ------------ 00:09:21.532 Flush (00h): Supported LBA-Change 00:09:21.532 Write (01h): Supported LBA-Change 00:09:21.532 Read (02h): Supported 00:09:21.532 Compare (05h): Supported 00:09:21.532 Write Zeroes (08h): Supported LBA-Change 00:09:21.532 Dataset Management (09h): Supported LBA-Change 00:09:21.532 Unknown (0Ch): Supported 00:09:21.532 Unknown (12h): Supported 00:09:21.532 Copy (19h): Supported LBA-Change 00:09:21.532 Unknown (1Dh): Supported LBA-Change 00:09:21.532 00:09:21.532 Error Log 00:09:21.532 ========= 00:09:21.532 00:09:21.532 Arbitration 00:09:21.532 =========== 00:09:21.532 Arbitration Burst: no limit 00:09:21.532 00:09:21.532 Power Management 00:09:21.532 ================ 00:09:21.532 Number of Power States: 1 00:09:21.532 Current Power State: Power State #0 00:09:21.532 Power State #0: 00:09:21.532 Max 
Power: 25.00 W 00:09:21.532 Non-Operational State: Operational 00:09:21.532 Entry Latency: 16 microseconds 00:09:21.532 Exit Latency: 4 microseconds 00:09:21.532 Relative Read Throughput: 0 00:09:21.532 Relative Read Latency: 0 00:09:21.532 Relative Write Throughput: 0 00:09:21.532 Relative Write Latency: 0 00:09:21.532 Idle Power: Not Reported 00:09:21.532 Active Power: Not Reported 00:09:21.532 Non-Operational Permissive Mode: Not Supported 00:09:21.532 00:09:21.532 Health Information 00:09:21.532 ================== 00:09:21.532 Critical Warnings: 00:09:21.532 Available Spare Space: OK 00:09:21.532 Temperature: OK 00:09:21.532 Device Reliability: OK 00:09:21.532 Read Only: No 00:09:21.532 Volatile Memory Backup: OK 00:09:21.532 Current Temperature: 323 Kelvin (50 Celsius) 00:09:21.532 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:21.532 Available Spare: 0% 00:09:21.532 Available Spare Threshold: 0% 00:09:21.532 Life Percentage Used: 0% 00:09:21.532 Data Units Read: 788 00:09:21.532 Data Units Written: 681 00:09:21.533 Host Read Commands: 34508 00:09:21.533 Host Write Commands: 33098 00:09:21.533 Controller Busy Time: 0 minutes 00:09:21.533 Power Cycles: 0 00:09:21.533 Power On Hours: 0 hours 00:09:21.533 Unsafe Shutdowns: 0 00:09:21.533 Unrecoverable Media Errors: 0 00:09:21.533 Lifetime Error Log Entries: 0 00:09:21.533 Warning Temperature Time: 0 minutes 00:09:21.533 Critical Temperature Time: 0 minutes 00:09:21.533 00:09:21.533 Number of Queues 00:09:21.533 ================ 00:09:21.533 Number of I/O Submission Queues: 64 00:09:21.533 Number of I/O Completion Queues: 64 00:09:21.533 00:09:21.533 ZNS Specific Controller Data 00:09:21.533 ============================ 00:09:21.533 Zone Append Size Limit: 0 00:09:21.533 00:09:21.533 00:09:21.533 Active Namespaces 00:09:21.533 ================= 00:09:21.533 Namespace ID:1 00:09:21.533 Error Recovery Timeout: Unlimited 00:09:21.533 Command Set Identifier: NVM (00h) 00:09:21.533 Deallocate: Supported 00:09:21.533 Deallocated/Unwritten Error: Supported 00:09:21.533 Deallocated Read Value: All 0x00 00:09:21.533 Deallocate in Write Zeroes: Not Supported 00:09:21.533 Deallocated Guard Field: 0xFFFF 00:09:21.533 Flush: Supported 00:09:21.533 Reservation: Not Supported 00:09:21.533 Namespace Sharing Capabilities: Multiple Controllers 00:09:21.533 Size (in LBAs): 262144 (1GiB) 00:09:21.533 Capacity (in LBAs): 262144 (1GiB) 00:09:21.533 Utilization (in LBAs): 262144 (1GiB) 00:09:21.533 Thin Provisioning: Not Supported 00:09:21.533 Per-NS Atomic Units: No 00:09:21.533 Maximum Single Source Range Length: 128 00:09:21.533 Maximum Copy Length: 128 00:09:21.533 Maximum Source Range Count: 128 00:09:21.533 NGUID/EUI64 Never Reused: No 00:09:21.533 Namespace Write Protected: No 00:09:21.533 Endurance group ID: 1 00:09:21.533 Number of LBA Formats: 8 00:09:21.533 Current LBA Format: LBA Format #04 00:09:21.533 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:21.533 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:21.533 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:21.533 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:21.533 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:21.533 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:21.533 [2024-05-14 02:56:07.353346] nvme_ctrlr.c:3485:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 81302 terminated unexpected 00:09:21.533 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:21.533 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:21.533
00:09:21.533 Get Feature FDP: 00:09:21.533 ================ 00:09:21.533 Enabled: Yes 00:09:21.533 FDP configuration index: 0 00:09:21.533 00:09:21.533 FDP configurations log page 00:09:21.533 =========================== 00:09:21.533 Number of FDP configurations: 1 00:09:21.533 Version: 0 00:09:21.533 Size: 112 00:09:21.533 FDP Configuration Descriptor: 0 00:09:21.533 Descriptor Size: 96 00:09:21.533 Reclaim Group Identifier format: 2 00:09:21.533 FDP Volatile Write Cache: Not Present 00:09:21.533 FDP Configuration: Valid 00:09:21.533 Vendor Specific Size: 0 00:09:21.533 Number of Reclaim Groups: 2 00:09:21.533 Number of Recalim Unit Handles: 8 00:09:21.533 Max Placement Identifiers: 128 00:09:21.533 Number of Namespaces Suppprted: 256 00:09:21.533 Reclaim unit Nominal Size: 6000000 bytes 00:09:21.533 Estimated Reclaim Unit Time Limit: Not Reported 00:09:21.533 RUH Desc #000: RUH Type: Initially Isolated 00:09:21.533 RUH Desc #001: RUH Type: Initially Isolated 00:09:21.533 RUH Desc #002: RUH Type: Initially Isolated 00:09:21.533 RUH Desc #003: RUH Type: Initially Isolated 00:09:21.533 RUH Desc #004: RUH Type: Initially Isolated 00:09:21.533 RUH Desc #005: RUH Type: Initially Isolated 00:09:21.533 RUH Desc #006: RUH Type: Initially Isolated 00:09:21.533 RUH Desc #007: RUH Type: Initially Isolated 00:09:21.533 00:09:21.533 FDP reclaim unit handle usage log page 00:09:21.533 ====================================== 00:09:21.533 Number of Reclaim Unit Handles: 8 00:09:21.533 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:21.533 RUH Usage Desc #001: RUH Attributes: Unused 00:09:21.533 RUH Usage Desc #002: RUH Attributes: Unused 00:09:21.533 RUH Usage Desc #003: RUH Attributes: Unused 00:09:21.533 RUH Usage Desc #004: RUH Attributes: Unused 00:09:21.533 RUH Usage Desc #005: RUH Attributes: Unused 00:09:21.533 RUH Usage Desc #006: RUH Attributes: Unused 00:09:21.533 RUH Usage Desc #007: RUH Attributes: Unused 00:09:21.533 00:09:21.533 FDP statistics log page 00:09:21.533 ======================= 00:09:21.533 Host bytes with metadata written: 432513024 00:09:21.533 Media bytes with metadata written: 432558080 00:09:21.533 Media bytes erased: 0 00:09:21.533 00:09:21.533 FDP events log page 00:09:21.533 =================== 00:09:21.533 Number of FDP events: 0 00:09:21.533 00:09:21.533 ===================================================== 00:09:21.533 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:21.533 ===================================================== 00:09:21.533 Controller Capabilities/Features 00:09:21.533 ================================ 00:09:21.533 Vendor ID: 1b36 00:09:21.533 Subsystem Vendor ID: 1af4 00:09:21.533 Serial Number: 12342 00:09:21.533 Model Number: QEMU NVMe Ctrl 00:09:21.533 Firmware Version: 8.0.0 00:09:21.533 Recommended Arb Burst: 6 00:09:21.533 IEEE OUI Identifier: 00 54 52 00:09:21.533 Multi-path I/O 00:09:21.533 May have multiple subsystem ports: No 00:09:21.533 May have multiple controllers: No 00:09:21.533 Associated with SR-IOV VF: No 00:09:21.533 Max Data Transfer Size: 524288 00:09:21.533 Max Number of Namespaces: 256 00:09:21.533 Max Number of I/O Queues: 64 00:09:21.533 NVMe Specification Version (VS): 1.4 00:09:21.533 NVMe Specification Version (Identify): 1.4 00:09:21.533 Maximum Queue Entries: 2048 00:09:21.533 Contiguous Queues Required: Yes 00:09:21.533 Arbitration Mechanisms Supported 00:09:21.533 Weighted Round Robin: Not Supported 00:09:21.533 Vendor Specific: Not Supported 00:09:21.533 Reset Timeout: 7500 ms 00:09:21.533 Doorbell 
Stride: 4 bytes 00:09:21.533 NVM Subsystem Reset: Not Supported 00:09:21.533 Command Sets Supported 00:09:21.533 NVM Command Set: Supported 00:09:21.533 Boot Partition: Not Supported 00:09:21.533 Memory Page Size Minimum: 4096 bytes 00:09:21.533 Memory Page Size Maximum: 65536 bytes 00:09:21.533 Persistent Memory Region: Not Supported 00:09:21.533 Optional Asynchronous Events Supported 00:09:21.533 Namespace Attribute Notices: Supported 00:09:21.533 Firmware Activation Notices: Not Supported 00:09:21.533 ANA Change Notices: Not Supported 00:09:21.533 PLE Aggregate Log Change Notices: Not Supported 00:09:21.533 LBA Status Info Alert Notices: Not Supported 00:09:21.533 EGE Aggregate Log Change Notices: Not Supported 00:09:21.533 Normal NVM Subsystem Shutdown event: Not Supported 00:09:21.533 Zone Descriptor Change Notices: Not Supported 00:09:21.533 Discovery Log Change Notices: Not Supported 00:09:21.533 Controller Attributes 00:09:21.533 128-bit Host Identifier: Not Supported 00:09:21.533 Non-Operational Permissive Mode: Not Supported 00:09:21.533 NVM Sets: Not Supported 00:09:21.533 Read Recovery Levels: Not Supported 00:09:21.533 Endurance Groups: Not Supported 00:09:21.533 Predictable Latency Mode: Not Supported 00:09:21.533 Traffic Based Keep ALive: Not Supported 00:09:21.533 Namespace Granularity: Not Supported 00:09:21.533 SQ Associations: Not Supported 00:09:21.533 UUID List: Not Supported 00:09:21.533 Multi-Domain Subsystem: Not Supported 00:09:21.533 Fixed Capacity Management: Not Supported 00:09:21.533 Variable Capacity Management: Not Supported 00:09:21.533 Delete Endurance Group: Not Supported 00:09:21.533 Delete NVM Set: Not Supported 00:09:21.533 Extended LBA Formats Supported: Supported 00:09:21.533 Flexible Data Placement Supported: Not Supported 00:09:21.533 00:09:21.533 Controller Memory Buffer Support 00:09:21.533 ================================ 00:09:21.533 Supported: No 00:09:21.533 00:09:21.533 Persistent Memory Region Support 00:09:21.533 ================================ 00:09:21.533 Supported: No 00:09:21.533 00:09:21.533 Admin Command Set Attributes 00:09:21.533 ============================ 00:09:21.533 Security Send/Receive: Not Supported 00:09:21.533 Format NVM: Supported 00:09:21.533 Firmware Activate/Download: Not Supported 00:09:21.533 Namespace Management: Supported 00:09:21.533 Device Self-Test: Not Supported 00:09:21.533 Directives: Supported 00:09:21.533 NVMe-MI: Not Supported 00:09:21.533 Virtualization Management: Not Supported 00:09:21.533 Doorbell Buffer Config: Supported 00:09:21.533 Get LBA Status Capability: Not Supported 00:09:21.533 Command & Feature Lockdown Capability: Not Supported 00:09:21.533 Abort Command Limit: 4 00:09:21.533 Async Event Request Limit: 4 00:09:21.533 Number of Firmware Slots: N/A 00:09:21.533 Firmware Slot 1 Read-Only: N/A 00:09:21.534 Firmware Activation Without Reset: N/A 00:09:21.534 Multiple Update Detection Support: N/A 00:09:21.534 Firmware Update Granularity: No Information Provided 00:09:21.534 Per-Namespace SMART Log: Yes 00:09:21.534 Asymmetric Namespace Access Log Page: Not Supported 00:09:21.534 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:21.534 Command Effects Log Page: Supported 00:09:21.534 Get Log Page Extended Data: Supported 00:09:21.534 Telemetry Log Pages: Not Supported 00:09:21.534 Persistent Event Log Pages: Not Supported 00:09:21.534 Supported Log Pages Log Page: May Support 00:09:21.534 Commands Supported & Effects Log Page: Not Supported 00:09:21.534 Feature Identifiers & Effects Log Page:May 
Support 00:09:21.534 NVMe-MI Commands & Effects Log Page: May Support 00:09:21.534 Data Area 4 for Telemetry Log: Not Supported 00:09:21.534 Error Log Page Entries Supported: 1 00:09:21.534 Keep Alive: Not Supported 00:09:21.534 00:09:21.534 NVM Command Set Attributes 00:09:21.534 ========================== 00:09:21.534 Submission Queue Entry Size 00:09:21.534 Max: 64 00:09:21.534 Min: 64 00:09:21.534 Completion Queue Entry Size 00:09:21.534 Max: 16 00:09:21.534 Min: 16 00:09:21.534 Number of Namespaces: 256 00:09:21.534 Compare Command: Supported 00:09:21.534 Write Uncorrectable Command: Not Supported 00:09:21.534 Dataset Management Command: Supported 00:09:21.534 Write Zeroes Command: Supported 00:09:21.534 Set Features Save Field: Supported 00:09:21.534 Reservations: Not Supported 00:09:21.534 Timestamp: Supported 00:09:21.534 Copy: Supported 00:09:21.534 Volatile Write Cache: Present 00:09:21.534 Atomic Write Unit (Normal): 1 00:09:21.534 Atomic Write Unit (PFail): 1 00:09:21.534 Atomic Compare & Write Unit: 1 00:09:21.534 Fused Compare & Write: Not Supported 00:09:21.534 Scatter-Gather List 00:09:21.534 SGL Command Set: Supported 00:09:21.534 SGL Keyed: Not Supported 00:09:21.534 SGL Bit Bucket Descriptor: Not Supported 00:09:21.534 SGL Metadata Pointer: Not Supported 00:09:21.534 Oversized SGL: Not Supported 00:09:21.534 SGL Metadata Address: Not Supported 00:09:21.534 SGL Offset: Not Supported 00:09:21.534 Transport SGL Data Block: Not Supported 00:09:21.534 Replay Protected Memory Block: Not Supported 00:09:21.534 00:09:21.534 Firmware Slot Information 00:09:21.534 ========================= 00:09:21.534 Active slot: 1 00:09:21.534 Slot 1 Firmware Revision: 1.0 00:09:21.534 00:09:21.534 00:09:21.534 Commands Supported and Effects 00:09:21.534 ============================== 00:09:21.534 Admin Commands 00:09:21.534 -------------- 00:09:21.534 Delete I/O Submission Queue (00h): Supported 00:09:21.534 Create I/O Submission Queue (01h): Supported 00:09:21.534 Get Log Page (02h): Supported 00:09:21.534 Delete I/O Completion Queue (04h): Supported 00:09:21.534 Create I/O Completion Queue (05h): Supported 00:09:21.534 Identify (06h): Supported 00:09:21.534 Abort (08h): Supported 00:09:21.534 Set Features (09h): Supported 00:09:21.534 Get Features (0Ah): Supported 00:09:21.534 Asynchronous Event Request (0Ch): Supported 00:09:21.534 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:21.534 Directive Send (19h): Supported 00:09:21.534 Directive Receive (1Ah): Supported 00:09:21.534 Virtualization Management (1Ch): Supported 00:09:21.534 Doorbell Buffer Config (7Ch): Supported 00:09:21.534 Format NVM (80h): Supported LBA-Change 00:09:21.534 I/O Commands 00:09:21.534 ------------ 00:09:21.534 Flush (00h): Supported LBA-Change 00:09:21.534 Write (01h): Supported LBA-Change 00:09:21.534 Read (02h): Supported 00:09:21.534 Compare (05h): Supported 00:09:21.534 Write Zeroes (08h): Supported LBA-Change 00:09:21.534 Dataset Management (09h): Supported LBA-Change 00:09:21.534 Unknown (0Ch): Supported 00:09:21.534 Unknown (12h): Supported 00:09:21.534 Copy (19h): Supported LBA-Change 00:09:21.534 Unknown (1Dh): Supported LBA-Change 00:09:21.534 00:09:21.534 Error Log 00:09:21.534 ========= 00:09:21.534 00:09:21.534 Arbitration 00:09:21.534 =========== 00:09:21.534 Arbitration Burst: no limit 00:09:21.534 00:09:21.534 Power Management 00:09:21.534 ================ 00:09:21.534 Number of Power States: 1 00:09:21.534 Current Power State: Power State #0 00:09:21.534 Power State #0: 00:09:21.534 
Max Power: 25.00 W 00:09:21.534 Non-Operational State: Operational 00:09:21.534 Entry Latency: 16 microseconds 00:09:21.534 Exit Latency: 4 microseconds 00:09:21.534 Relative Read Throughput: 0 00:09:21.534 Relative Read Latency: 0 00:09:21.534 Relative Write Throughput: 0 00:09:21.534 Relative Write Latency: 0 00:09:21.534 Idle Power: Not Reported 00:09:21.534 Active Power: Not Reported 00:09:21.534 Non-Operational Permissive Mode: Not Supported 00:09:21.534 00:09:21.534 Health Information 00:09:21.534 ================== 00:09:21.534 Critical Warnings: 00:09:21.534 Available Spare Space: OK 00:09:21.534 Temperature: OK 00:09:21.534 Device Reliability: OK 00:09:21.534 Read Only: No 00:09:21.534 Volatile Memory Backup: OK 00:09:21.534 Current Temperature: 323 Kelvin (50 Celsius) 00:09:21.534 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:21.534 Available Spare: 0% 00:09:21.534 Available Spare Threshold: 0% 00:09:21.534 Life Percentage Used: 0% 00:09:21.534 Data Units Read: 2186 00:09:21.534 Data Units Written: 1866 00:09:21.534 Host Read Commands: 102142 00:09:21.534 Host Write Commands: 97912 00:09:21.534 Controller Busy Time: 0 minutes 00:09:21.534 Power Cycles: 0 00:09:21.534 Power On Hours: 0 hours 00:09:21.534 Unsafe Shutdowns: 0 00:09:21.534 Unrecoverable Media Errors: 0 00:09:21.534 Lifetime Error Log Entries: 0 00:09:21.534 Warning Temperature Time: 0 minutes 00:09:21.534 Critical Temperature Time: 0 minutes 00:09:21.534 00:09:21.534 Number of Queues 00:09:21.534 ================ 00:09:21.534 Number of I/O Submission Queues: 64 00:09:21.534 Number of I/O Completion Queues: 64 00:09:21.534 00:09:21.534 ZNS Specific Controller Data 00:09:21.534 ============================ 00:09:21.534 Zone Append Size Limit: 0 00:09:21.534 00:09:21.534 00:09:21.534 Active Namespaces 00:09:21.534 ================= 00:09:21.534 Namespace ID:1 00:09:21.534 Error Recovery Timeout: Unlimited 00:09:21.534 Command Set Identifier: NVM (00h) 00:09:21.534 Deallocate: Supported 00:09:21.534 Deallocated/Unwritten Error: Supported 00:09:21.534 Deallocated Read Value: All 0x00 00:09:21.534 Deallocate in Write Zeroes: Not Supported 00:09:21.534 Deallocated Guard Field: 0xFFFF 00:09:21.534 Flush: Supported 00:09:21.534 Reservation: Not Supported 00:09:21.534 Namespace Sharing Capabilities: Private 00:09:21.534 Size (in LBAs): 1048576 (4GiB) 00:09:21.534 Capacity (in LBAs): 1048576 (4GiB) 00:09:21.534 Utilization (in LBAs): 1048576 (4GiB) 00:09:21.534 Thin Provisioning: Not Supported 00:09:21.534 Per-NS Atomic Units: No 00:09:21.534 Maximum Single Source Range Length: 128 00:09:21.534 Maximum Copy Length: 128 00:09:21.534 Maximum Source Range Count: 128 00:09:21.534 NGUID/EUI64 Never Reused: No 00:09:21.534 Namespace Write Protected: No 00:09:21.534 Number of LBA Formats: 8 00:09:21.534 Current LBA Format: LBA Format #04 00:09:21.534 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:21.534 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:21.534 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:21.534 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:21.535 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:21.535 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:21.535 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:21.535 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:21.535 00:09:21.535 Namespace ID:2 00:09:21.535 Error Recovery Timeout: Unlimited 00:09:21.535 Command Set Identifier: NVM (00h) 00:09:21.535 Deallocate: Supported 00:09:21.535 Deallocated/Unwritten 
Error: Supported 00:09:21.535 Deallocated Read Value: All 0x00 00:09:21.535 Deallocate in Write Zeroes: Not Supported 00:09:21.535 Deallocated Guard Field: 0xFFFF 00:09:21.535 Flush: Supported 00:09:21.535 Reservation: Not Supported 00:09:21.535 Namespace Sharing Capabilities: Private 00:09:21.535 Size (in LBAs): 1048576 (4GiB) 00:09:21.535 Capacity (in LBAs): 1048576 (4GiB) 00:09:21.535 Utilization (in LBAs): 1048576 (4GiB) 00:09:21.535 Thin Provisioning: Not Supported 00:09:21.535 Per-NS Atomic Units: No 00:09:21.535 Maximum Single Source Range Length: 128 00:09:21.535 Maximum Copy Length: 128 00:09:21.535 Maximum Source Range Count: 128 00:09:21.535 NGUID/EUI64 Never Reused: No 00:09:21.535 Namespace Write Protected: No 00:09:21.535 Number of LBA Formats: 8 00:09:21.535 Current LBA Format: LBA Format #04 00:09:21.535 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:21.535 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:21.535 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:21.535 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:21.535 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:21.535 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:21.535 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:21.535 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:21.535 00:09:21.535 Namespace ID:3 00:09:21.535 Error Recovery Timeout: Unlimited 00:09:21.535 Command Set Identifier: NVM (00h) 00:09:21.535 Deallocate: Supported 00:09:21.535 Deallocated/Unwritten Error: Supported 00:09:21.535 Deallocated Read Value: All 0x00 00:09:21.535 Deallocate in Write Zeroes: Not Supported 00:09:21.535 Deallocated Guard Field: 0xFFFF 00:09:21.535 Flush: Supported 00:09:21.535 Reservation: Not Supported 00:09:21.535 Namespace Sharing Capabilities: Private 00:09:21.535 Size (in LBAs): 1048576 (4GiB) 00:09:21.535 Capacity (in LBAs): 1048576 (4GiB) 00:09:21.535 Utilization (in LBAs): 1048576 (4GiB) 00:09:21.535 Thin Provisioning: Not Supported 00:09:21.535 Per-NS Atomic Units: No 00:09:21.535 Maximum Single Source Range Length: 128 00:09:21.535 Maximum Copy Length: 128 00:09:21.535 Maximum Source Range Count: 128 00:09:21.535 NGUID/EUI64 Never Reused: No 00:09:21.535 Namespace Write Protected: No 00:09:21.535 Number of LBA Formats: 8 00:09:21.535 Current LBA Format: LBA Format #04 00:09:21.535 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:21.535 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:21.535 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:21.535 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:21.535 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:21.535 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:21.535 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:21.535 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:21.535 00:09:21.535 02:56:07 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:21.535 02:56:07 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:09:21.794 ===================================================== 00:09:21.794 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:21.794 ===================================================== 00:09:21.794 Controller Capabilities/Features 00:09:21.794 ================================ 00:09:21.794 Vendor ID: 1b36 00:09:21.794 Subsystem Vendor ID: 1af4 00:09:21.794 Serial Number: 12340 00:09:21.794 Model Number: QEMU NVMe Ctrl 00:09:21.794 Firmware 
Version: 8.0.0 00:09:21.794 Recommended Arb Burst: 6 00:09:21.795 IEEE OUI Identifier: 00 54 52 00:09:21.795 Multi-path I/O 00:09:21.795 May have multiple subsystem ports: No 00:09:21.795 May have multiple controllers: No 00:09:21.795 Associated with SR-IOV VF: No 00:09:21.795 Max Data Transfer Size: 524288 00:09:21.795 Max Number of Namespaces: 256 00:09:21.795 Max Number of I/O Queues: 64 00:09:21.795 NVMe Specification Version (VS): 1.4 00:09:21.795 NVMe Specification Version (Identify): 1.4 00:09:21.795 Maximum Queue Entries: 2048 00:09:21.795 Contiguous Queues Required: Yes 00:09:21.795 Arbitration Mechanisms Supported 00:09:21.795 Weighted Round Robin: Not Supported 00:09:21.795 Vendor Specific: Not Supported 00:09:21.795 Reset Timeout: 7500 ms 00:09:21.795 Doorbell Stride: 4 bytes 00:09:21.795 NVM Subsystem Reset: Not Supported 00:09:21.795 Command Sets Supported 00:09:21.795 NVM Command Set: Supported 00:09:21.795 Boot Partition: Not Supported 00:09:21.795 Memory Page Size Minimum: 4096 bytes 00:09:21.795 Memory Page Size Maximum: 65536 bytes 00:09:21.795 Persistent Memory Region: Not Supported 00:09:21.795 Optional Asynchronous Events Supported 00:09:21.795 Namespace Attribute Notices: Supported 00:09:21.795 Firmware Activation Notices: Not Supported 00:09:21.795 ANA Change Notices: Not Supported 00:09:21.795 PLE Aggregate Log Change Notices: Not Supported 00:09:21.795 LBA Status Info Alert Notices: Not Supported 00:09:21.795 EGE Aggregate Log Change Notices: Not Supported 00:09:21.795 Normal NVM Subsystem Shutdown event: Not Supported 00:09:21.795 Zone Descriptor Change Notices: Not Supported 00:09:21.795 Discovery Log Change Notices: Not Supported 00:09:21.795 Controller Attributes 00:09:21.795 128-bit Host Identifier: Not Supported 00:09:21.795 Non-Operational Permissive Mode: Not Supported 00:09:21.795 NVM Sets: Not Supported 00:09:21.795 Read Recovery Levels: Not Supported 00:09:21.795 Endurance Groups: Not Supported 00:09:21.795 Predictable Latency Mode: Not Supported 00:09:21.795 Traffic Based Keep ALive: Not Supported 00:09:21.795 Namespace Granularity: Not Supported 00:09:21.795 SQ Associations: Not Supported 00:09:21.795 UUID List: Not Supported 00:09:21.795 Multi-Domain Subsystem: Not Supported 00:09:21.795 Fixed Capacity Management: Not Supported 00:09:21.795 Variable Capacity Management: Not Supported 00:09:21.795 Delete Endurance Group: Not Supported 00:09:21.795 Delete NVM Set: Not Supported 00:09:21.795 Extended LBA Formats Supported: Supported 00:09:21.795 Flexible Data Placement Supported: Not Supported 00:09:21.795 00:09:21.795 Controller Memory Buffer Support 00:09:21.795 ================================ 00:09:21.795 Supported: No 00:09:21.795 00:09:21.795 Persistent Memory Region Support 00:09:21.795 ================================ 00:09:21.795 Supported: No 00:09:21.795 00:09:21.795 Admin Command Set Attributes 00:09:21.795 ============================ 00:09:21.795 Security Send/Receive: Not Supported 00:09:21.795 Format NVM: Supported 00:09:21.795 Firmware Activate/Download: Not Supported 00:09:21.795 Namespace Management: Supported 00:09:21.795 Device Self-Test: Not Supported 00:09:21.795 Directives: Supported 00:09:21.795 NVMe-MI: Not Supported 00:09:21.795 Virtualization Management: Not Supported 00:09:21.795 Doorbell Buffer Config: Supported 00:09:21.795 Get LBA Status Capability: Not Supported 00:09:21.795 Command & Feature Lockdown Capability: Not Supported 00:09:21.795 Abort Command Limit: 4 00:09:21.795 Async Event Request Limit: 4 00:09:21.795 
Number of Firmware Slots: N/A 00:09:21.795 Firmware Slot 1 Read-Only: N/A 00:09:21.795 Firmware Activation Without Reset: N/A 00:09:21.795 Multiple Update Detection Support: N/A 00:09:21.795 Firmware Update Granularity: No Information Provided 00:09:21.795 Per-Namespace SMART Log: Yes 00:09:21.795 Asymmetric Namespace Access Log Page: Not Supported 00:09:21.795 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:09:21.795 Command Effects Log Page: Supported 00:09:21.795 Get Log Page Extended Data: Supported 00:09:21.795 Telemetry Log Pages: Not Supported 00:09:21.795 Persistent Event Log Pages: Not Supported 00:09:21.795 Supported Log Pages Log Page: May Support 00:09:21.795 Commands Supported & Effects Log Page: Not Supported 00:09:21.795 Feature Identifiers & Effects Log Page:May Support 00:09:21.795 NVMe-MI Commands & Effects Log Page: May Support 00:09:21.795 Data Area 4 for Telemetry Log: Not Supported 00:09:21.795 Error Log Page Entries Supported: 1 00:09:21.795 Keep Alive: Not Supported 00:09:21.795 00:09:21.795 NVM Command Set Attributes 00:09:21.795 ========================== 00:09:21.795 Submission Queue Entry Size 00:09:21.795 Max: 64 00:09:21.795 Min: 64 00:09:21.795 Completion Queue Entry Size 00:09:21.795 Max: 16 00:09:21.795 Min: 16 00:09:21.795 Number of Namespaces: 256 00:09:21.795 Compare Command: Supported 00:09:21.795 Write Uncorrectable Command: Not Supported 00:09:21.795 Dataset Management Command: Supported 00:09:21.795 Write Zeroes Command: Supported 00:09:21.795 Set Features Save Field: Supported 00:09:21.795 Reservations: Not Supported 00:09:21.795 Timestamp: Supported 00:09:21.795 Copy: Supported 00:09:21.795 Volatile Write Cache: Present 00:09:21.795 Atomic Write Unit (Normal): 1 00:09:21.795 Atomic Write Unit (PFail): 1 00:09:21.795 Atomic Compare & Write Unit: 1 00:09:21.795 Fused Compare & Write: Not Supported 00:09:21.795 Scatter-Gather List 00:09:21.795 SGL Command Set: Supported 00:09:21.795 SGL Keyed: Not Supported 00:09:21.795 SGL Bit Bucket Descriptor: Not Supported 00:09:21.795 SGL Metadata Pointer: Not Supported 00:09:21.795 Oversized SGL: Not Supported 00:09:21.795 SGL Metadata Address: Not Supported 00:09:21.795 SGL Offset: Not Supported 00:09:21.795 Transport SGL Data Block: Not Supported 00:09:21.795 Replay Protected Memory Block: Not Supported 00:09:21.795 00:09:21.795 Firmware Slot Information 00:09:21.795 ========================= 00:09:21.795 Active slot: 1 00:09:21.795 Slot 1 Firmware Revision: 1.0 00:09:21.795 00:09:21.795 00:09:21.795 Commands Supported and Effects 00:09:21.795 ============================== 00:09:21.795 Admin Commands 00:09:21.795 -------------- 00:09:21.795 Delete I/O Submission Queue (00h): Supported 00:09:21.795 Create I/O Submission Queue (01h): Supported 00:09:21.795 Get Log Page (02h): Supported 00:09:21.795 Delete I/O Completion Queue (04h): Supported 00:09:21.795 Create I/O Completion Queue (05h): Supported 00:09:21.795 Identify (06h): Supported 00:09:21.795 Abort (08h): Supported 00:09:21.795 Set Features (09h): Supported 00:09:21.795 Get Features (0Ah): Supported 00:09:21.795 Asynchronous Event Request (0Ch): Supported 00:09:21.795 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:21.795 Directive Send (19h): Supported 00:09:21.795 Directive Receive (1Ah): Supported 00:09:21.795 Virtualization Management (1Ch): Supported 00:09:21.795 Doorbell Buffer Config (7Ch): Supported 00:09:21.795 Format NVM (80h): Supported LBA-Change 00:09:21.795 I/O Commands 00:09:21.795 ------------ 00:09:21.795 Flush (00h): 
Supported LBA-Change 00:09:21.795 Write (01h): Supported LBA-Change 00:09:21.795 Read (02h): Supported 00:09:21.795 Compare (05h): Supported 00:09:21.795 Write Zeroes (08h): Supported LBA-Change 00:09:21.795 Dataset Management (09h): Supported LBA-Change 00:09:21.795 Unknown (0Ch): Supported 00:09:21.795 Unknown (12h): Supported 00:09:21.795 Copy (19h): Supported LBA-Change 00:09:21.795 Unknown (1Dh): Supported LBA-Change 00:09:21.795 00:09:21.795 Error Log 00:09:21.795 ========= 00:09:21.795 00:09:21.795 Arbitration 00:09:21.795 =========== 00:09:21.795 Arbitration Burst: no limit 00:09:21.795 00:09:21.795 Power Management 00:09:21.795 ================ 00:09:21.795 Number of Power States: 1 00:09:21.795 Current Power State: Power State #0 00:09:21.795 Power State #0: 00:09:21.795 Max Power: 25.00 W 00:09:21.795 Non-Operational State: Operational 00:09:21.795 Entry Latency: 16 microseconds 00:09:21.795 Exit Latency: 4 microseconds 00:09:21.795 Relative Read Throughput: 0 00:09:21.795 Relative Read Latency: 0 00:09:21.795 Relative Write Throughput: 0 00:09:21.795 Relative Write Latency: 0 00:09:21.795 Idle Power: Not Reported 00:09:21.795 Active Power: Not Reported 00:09:21.795 Non-Operational Permissive Mode: Not Supported 00:09:21.795 00:09:21.795 Health Information 00:09:21.795 ================== 00:09:21.795 Critical Warnings: 00:09:21.795 Available Spare Space: OK 00:09:21.795 Temperature: OK 00:09:21.795 Device Reliability: OK 00:09:21.795 Read Only: No 00:09:21.795 Volatile Memory Backup: OK 00:09:21.795 Current Temperature: 323 Kelvin (50 Celsius) 00:09:21.795 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:21.795 Available Spare: 0% 00:09:21.795 Available Spare Threshold: 0% 00:09:21.795 Life Percentage Used: 0% 00:09:21.795 Data Units Read: 1016 00:09:21.795 Data Units Written: 844 00:09:21.795 Host Read Commands: 48670 00:09:21.795 Host Write Commands: 47106 00:09:21.795 Controller Busy Time: 0 minutes 00:09:21.796 Power Cycles: 0 00:09:21.796 Power On Hours: 0 hours 00:09:21.796 Unsafe Shutdowns: 0 00:09:21.796 Unrecoverable Media Errors: 0 00:09:21.796 Lifetime Error Log Entries: 0 00:09:21.796 Warning Temperature Time: 0 minutes 00:09:21.796 Critical Temperature Time: 0 minutes 00:09:21.796 00:09:21.796 Number of Queues 00:09:21.796 ================ 00:09:21.796 Number of I/O Submission Queues: 64 00:09:21.796 Number of I/O Completion Queues: 64 00:09:21.796 00:09:21.796 ZNS Specific Controller Data 00:09:21.796 ============================ 00:09:21.796 Zone Append Size Limit: 0 00:09:21.796 00:09:21.796 00:09:21.796 Active Namespaces 00:09:21.796 ================= 00:09:21.796 Namespace ID:1 00:09:21.796 Error Recovery Timeout: Unlimited 00:09:21.796 Command Set Identifier: NVM (00h) 00:09:21.796 Deallocate: Supported 00:09:21.796 Deallocated/Unwritten Error: Supported 00:09:21.796 Deallocated Read Value: All 0x00 00:09:21.796 Deallocate in Write Zeroes: Not Supported 00:09:21.796 Deallocated Guard Field: 0xFFFF 00:09:21.796 Flush: Supported 00:09:21.796 Reservation: Not Supported 00:09:21.796 Metadata Transferred as: Separate Metadata Buffer 00:09:21.796 Namespace Sharing Capabilities: Private 00:09:21.796 Size (in LBAs): 1548666 (5GiB) 00:09:21.796 Capacity (in LBAs): 1548666 (5GiB) 00:09:21.796 Utilization (in LBAs): 1548666 (5GiB) 00:09:21.796 Thin Provisioning: Not Supported 00:09:21.796 Per-NS Atomic Units: No 00:09:21.796 Maximum Single Source Range Length: 128 00:09:21.796 Maximum Copy Length: 128 00:09:21.796 Maximum Source Range Count: 128 00:09:21.796 
NGUID/EUI64 Never Reused: No 00:09:21.796 Namespace Write Protected: No 00:09:21.796 Number of LBA Formats: 8 00:09:21.796 Current LBA Format: LBA Format #07 00:09:21.796 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:21.796 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:21.796 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:21.796 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:21.796 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:21.796 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:21.796 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:21.796 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:21.796 00:09:21.796 02:56:07 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:21.796 02:56:07 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:09:22.055 ===================================================== 00:09:22.055 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:22.055 ===================================================== 00:09:22.055 Controller Capabilities/Features 00:09:22.055 ================================ 00:09:22.055 Vendor ID: 1b36 00:09:22.055 Subsystem Vendor ID: 1af4 00:09:22.055 Serial Number: 12341 00:09:22.055 Model Number: QEMU NVMe Ctrl 00:09:22.055 Firmware Version: 8.0.0 00:09:22.055 Recommended Arb Burst: 6 00:09:22.055 IEEE OUI Identifier: 00 54 52 00:09:22.055 Multi-path I/O 00:09:22.055 May have multiple subsystem ports: No 00:09:22.055 May have multiple controllers: No 00:09:22.055 Associated with SR-IOV VF: No 00:09:22.055 Max Data Transfer Size: 524288 00:09:22.055 Max Number of Namespaces: 256 00:09:22.055 Max Number of I/O Queues: 64 00:09:22.055 NVMe Specification Version (VS): 1.4 00:09:22.055 NVMe Specification Version (Identify): 1.4 00:09:22.055 Maximum Queue Entries: 2048 00:09:22.055 Contiguous Queues Required: Yes 00:09:22.055 Arbitration Mechanisms Supported 00:09:22.055 Weighted Round Robin: Not Supported 00:09:22.055 Vendor Specific: Not Supported 00:09:22.055 Reset Timeout: 7500 ms 00:09:22.055 Doorbell Stride: 4 bytes 00:09:22.055 NVM Subsystem Reset: Not Supported 00:09:22.055 Command Sets Supported 00:09:22.055 NVM Command Set: Supported 00:09:22.055 Boot Partition: Not Supported 00:09:22.055 Memory Page Size Minimum: 4096 bytes 00:09:22.055 Memory Page Size Maximum: 65536 bytes 00:09:22.055 Persistent Memory Region: Not Supported 00:09:22.055 Optional Asynchronous Events Supported 00:09:22.055 Namespace Attribute Notices: Supported 00:09:22.055 Firmware Activation Notices: Not Supported 00:09:22.055 ANA Change Notices: Not Supported 00:09:22.055 PLE Aggregate Log Change Notices: Not Supported 00:09:22.055 LBA Status Info Alert Notices: Not Supported 00:09:22.056 EGE Aggregate Log Change Notices: Not Supported 00:09:22.056 Normal NVM Subsystem Shutdown event: Not Supported 00:09:22.056 Zone Descriptor Change Notices: Not Supported 00:09:22.056 Discovery Log Change Notices: Not Supported 00:09:22.056 Controller Attributes 00:09:22.056 128-bit Host Identifier: Not Supported 00:09:22.056 Non-Operational Permissive Mode: Not Supported 00:09:22.056 NVM Sets: Not Supported 00:09:22.056 Read Recovery Levels: Not Supported 00:09:22.056 Endurance Groups: Not Supported 00:09:22.056 Predictable Latency Mode: Not Supported 00:09:22.056 Traffic Based Keep ALive: Not Supported 00:09:22.056 Namespace Granularity: Not Supported 00:09:22.056 SQ Associations: Not Supported 00:09:22.056 UUID 
List: Not Supported 00:09:22.056 Multi-Domain Subsystem: Not Supported 00:09:22.056 Fixed Capacity Management: Not Supported 00:09:22.056 Variable Capacity Management: Not Supported 00:09:22.056 Delete Endurance Group: Not Supported 00:09:22.056 Delete NVM Set: Not Supported 00:09:22.056 Extended LBA Formats Supported: Supported 00:09:22.056 Flexible Data Placement Supported: Not Supported 00:09:22.056 00:09:22.056 Controller Memory Buffer Support 00:09:22.056 ================================ 00:09:22.056 Supported: No 00:09:22.056 00:09:22.056 Persistent Memory Region Support 00:09:22.056 ================================ 00:09:22.056 Supported: No 00:09:22.056 00:09:22.056 Admin Command Set Attributes 00:09:22.056 ============================ 00:09:22.056 Security Send/Receive: Not Supported 00:09:22.056 Format NVM: Supported 00:09:22.056 Firmware Activate/Download: Not Supported 00:09:22.056 Namespace Management: Supported 00:09:22.056 Device Self-Test: Not Supported 00:09:22.056 Directives: Supported 00:09:22.056 NVMe-MI: Not Supported 00:09:22.056 Virtualization Management: Not Supported 00:09:22.056 Doorbell Buffer Config: Supported 00:09:22.056 Get LBA Status Capability: Not Supported 00:09:22.056 Command & Feature Lockdown Capability: Not Supported 00:09:22.056 Abort Command Limit: 4 00:09:22.056 Async Event Request Limit: 4 00:09:22.056 Number of Firmware Slots: N/A 00:09:22.056 Firmware Slot 1 Read-Only: N/A 00:09:22.056 Firmware Activation Without Reset: N/A 00:09:22.056 Multiple Update Detection Support: N/A 00:09:22.056 Firmware Update Granularity: No Information Provided 00:09:22.056 Per-Namespace SMART Log: Yes 00:09:22.056 Asymmetric Namespace Access Log Page: Not Supported 00:09:22.056 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:22.056 Command Effects Log Page: Supported 00:09:22.056 Get Log Page Extended Data: Supported 00:09:22.056 Telemetry Log Pages: Not Supported 00:09:22.056 Persistent Event Log Pages: Not Supported 00:09:22.056 Supported Log Pages Log Page: May Support 00:09:22.056 Commands Supported & Effects Log Page: Not Supported 00:09:22.056 Feature Identifiers & Effects Log Page:May Support 00:09:22.056 NVMe-MI Commands & Effects Log Page: May Support 00:09:22.056 Data Area 4 for Telemetry Log: Not Supported 00:09:22.056 Error Log Page Entries Supported: 1 00:09:22.056 Keep Alive: Not Supported 00:09:22.056 00:09:22.056 NVM Command Set Attributes 00:09:22.056 ========================== 00:09:22.056 Submission Queue Entry Size 00:09:22.056 Max: 64 00:09:22.056 Min: 64 00:09:22.056 Completion Queue Entry Size 00:09:22.056 Max: 16 00:09:22.056 Min: 16 00:09:22.056 Number of Namespaces: 256 00:09:22.056 Compare Command: Supported 00:09:22.056 Write Uncorrectable Command: Not Supported 00:09:22.056 Dataset Management Command: Supported 00:09:22.056 Write Zeroes Command: Supported 00:09:22.056 Set Features Save Field: Supported 00:09:22.056 Reservations: Not Supported 00:09:22.056 Timestamp: Supported 00:09:22.056 Copy: Supported 00:09:22.056 Volatile Write Cache: Present 00:09:22.056 Atomic Write Unit (Normal): 1 00:09:22.056 Atomic Write Unit (PFail): 1 00:09:22.056 Atomic Compare & Write Unit: 1 00:09:22.056 Fused Compare & Write: Not Supported 00:09:22.056 Scatter-Gather List 00:09:22.056 SGL Command Set: Supported 00:09:22.056 SGL Keyed: Not Supported 00:09:22.056 SGL Bit Bucket Descriptor: Not Supported 00:09:22.056 SGL Metadata Pointer: Not Supported 00:09:22.056 Oversized SGL: Not Supported 00:09:22.056 SGL Metadata Address: Not Supported 00:09:22.056 SGL 
Offset: Not Supported 00:09:22.056 Transport SGL Data Block: Not Supported 00:09:22.056 Replay Protected Memory Block: Not Supported 00:09:22.056 00:09:22.056 Firmware Slot Information 00:09:22.056 ========================= 00:09:22.056 Active slot: 1 00:09:22.056 Slot 1 Firmware Revision: 1.0 00:09:22.056 00:09:22.056 00:09:22.056 Commands Supported and Effects 00:09:22.056 ============================== 00:09:22.056 Admin Commands 00:09:22.056 -------------- 00:09:22.056 Delete I/O Submission Queue (00h): Supported 00:09:22.056 Create I/O Submission Queue (01h): Supported 00:09:22.056 Get Log Page (02h): Supported 00:09:22.056 Delete I/O Completion Queue (04h): Supported 00:09:22.056 Create I/O Completion Queue (05h): Supported 00:09:22.056 Identify (06h): Supported 00:09:22.056 Abort (08h): Supported 00:09:22.056 Set Features (09h): Supported 00:09:22.056 Get Features (0Ah): Supported 00:09:22.056 Asynchronous Event Request (0Ch): Supported 00:09:22.056 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:22.056 Directive Send (19h): Supported 00:09:22.056 Directive Receive (1Ah): Supported 00:09:22.056 Virtualization Management (1Ch): Supported 00:09:22.056 Doorbell Buffer Config (7Ch): Supported 00:09:22.056 Format NVM (80h): Supported LBA-Change 00:09:22.056 I/O Commands 00:09:22.056 ------------ 00:09:22.056 Flush (00h): Supported LBA-Change 00:09:22.056 Write (01h): Supported LBA-Change 00:09:22.056 Read (02h): Supported 00:09:22.056 Compare (05h): Supported 00:09:22.056 Write Zeroes (08h): Supported LBA-Change 00:09:22.056 Dataset Management (09h): Supported LBA-Change 00:09:22.056 Unknown (0Ch): Supported 00:09:22.056 Unknown (12h): Supported 00:09:22.056 Copy (19h): Supported LBA-Change 00:09:22.056 Unknown (1Dh): Supported LBA-Change 00:09:22.056 00:09:22.056 Error Log 00:09:22.056 ========= 00:09:22.056 00:09:22.056 Arbitration 00:09:22.056 =========== 00:09:22.056 Arbitration Burst: no limit 00:09:22.056 00:09:22.056 Power Management 00:09:22.056 ================ 00:09:22.056 Number of Power States: 1 00:09:22.056 Current Power State: Power State #0 00:09:22.056 Power State #0: 00:09:22.056 Max Power: 25.00 W 00:09:22.056 Non-Operational State: Operational 00:09:22.056 Entry Latency: 16 microseconds 00:09:22.056 Exit Latency: 4 microseconds 00:09:22.056 Relative Read Throughput: 0 00:09:22.056 Relative Read Latency: 0 00:09:22.056 Relative Write Throughput: 0 00:09:22.056 Relative Write Latency: 0 00:09:22.056 Idle Power: Not Reported 00:09:22.056 Active Power: Not Reported 00:09:22.056 Non-Operational Permissive Mode: Not Supported 00:09:22.056 00:09:22.056 Health Information 00:09:22.056 ================== 00:09:22.056 Critical Warnings: 00:09:22.056 Available Spare Space: OK 00:09:22.056 Temperature: OK 00:09:22.056 Device Reliability: OK 00:09:22.056 Read Only: No 00:09:22.056 Volatile Memory Backup: OK 00:09:22.056 Current Temperature: 323 Kelvin (50 Celsius) 00:09:22.056 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:22.056 Available Spare: 0% 00:09:22.056 Available Spare Threshold: 0% 00:09:22.056 Life Percentage Used: 0% 00:09:22.056 Data Units Read: 752 00:09:22.056 Data Units Written: 603 00:09:22.056 Host Read Commands: 34765 00:09:22.056 Host Write Commands: 32543 00:09:22.056 Controller Busy Time: 0 minutes 00:09:22.056 Power Cycles: 0 00:09:22.056 Power On Hours: 0 hours 00:09:22.056 Unsafe Shutdowns: 0 00:09:22.056 Unrecoverable Media Errors: 0 00:09:22.056 Lifetime Error Log Entries: 0 00:09:22.056 Warning Temperature Time: 0 minutes 
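The Health Information blocks above report temperatures in Kelvin with the Celsius value in parentheses (323 Kelvin = 50 Celsius). A minimal shell sketch of that conversion, assuming the plain 273 K offset the output implies; the kelvin_to_celsius helper is illustrative and not part of the test scripts:

    kelvin_to_celsius() {
        # integer offset matching the identify output above (323 K -> 50 C)
        echo $(( $1 - 273 ))
    }
    kelvin_to_celsius 323   # 50, matches "Current Temperature"
    kelvin_to_celsius 343   # 70, matches "Temperature Threshold"
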
00:09:22.056 Critical Temperature Time: 0 minutes 00:09:22.056 00:09:22.056 Number of Queues 00:09:22.056 ================ 00:09:22.056 Number of I/O Submission Queues: 64 00:09:22.056 Number of I/O Completion Queues: 64 00:09:22.056 00:09:22.056 ZNS Specific Controller Data 00:09:22.056 ============================ 00:09:22.056 Zone Append Size Limit: 0 00:09:22.056 00:09:22.056 00:09:22.056 Active Namespaces 00:09:22.056 ================= 00:09:22.056 Namespace ID:1 00:09:22.056 Error Recovery Timeout: Unlimited 00:09:22.056 Command Set Identifier: NVM (00h) 00:09:22.056 Deallocate: Supported 00:09:22.056 Deallocated/Unwritten Error: Supported 00:09:22.056 Deallocated Read Value: All 0x00 00:09:22.056 Deallocate in Write Zeroes: Not Supported 00:09:22.056 Deallocated Guard Field: 0xFFFF 00:09:22.056 Flush: Supported 00:09:22.056 Reservation: Not Supported 00:09:22.056 Namespace Sharing Capabilities: Private 00:09:22.056 Size (in LBAs): 1310720 (5GiB) 00:09:22.056 Capacity (in LBAs): 1310720 (5GiB) 00:09:22.056 Utilization (in LBAs): 1310720 (5GiB) 00:09:22.056 Thin Provisioning: Not Supported 00:09:22.057 Per-NS Atomic Units: No 00:09:22.057 Maximum Single Source Range Length: 128 00:09:22.057 Maximum Copy Length: 128 00:09:22.057 Maximum Source Range Count: 128 00:09:22.057 NGUID/EUI64 Never Reused: No 00:09:22.057 Namespace Write Protected: No 00:09:22.057 Number of LBA Formats: 8 00:09:22.057 Current LBA Format: LBA Format #04 00:09:22.057 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:22.057 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:22.057 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:22.057 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:22.057 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:22.057 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:22.057 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:22.057 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:22.057 00:09:22.057 02:56:08 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:22.057 02:56:08 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:09:22.316 ===================================================== 00:09:22.316 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:22.316 ===================================================== 00:09:22.316 Controller Capabilities/Features 00:09:22.316 ================================ 00:09:22.316 Vendor ID: 1b36 00:09:22.316 Subsystem Vendor ID: 1af4 00:09:22.316 Serial Number: 12342 00:09:22.316 Model Number: QEMU NVMe Ctrl 00:09:22.316 Firmware Version: 8.0.0 00:09:22.316 Recommended Arb Burst: 6 00:09:22.316 IEEE OUI Identifier: 00 54 52 00:09:22.316 Multi-path I/O 00:09:22.316 May have multiple subsystem ports: No 00:09:22.316 May have multiple controllers: No 00:09:22.316 Associated with SR-IOV VF: No 00:09:22.316 Max Data Transfer Size: 524288 00:09:22.316 Max Number of Namespaces: 256 00:09:22.316 Max Number of I/O Queues: 64 00:09:22.316 NVMe Specification Version (VS): 1.4 00:09:22.316 NVMe Specification Version (Identify): 1.4 00:09:22.316 Maximum Queue Entries: 2048 00:09:22.316 Contiguous Queues Required: Yes 00:09:22.316 Arbitration Mechanisms Supported 00:09:22.316 Weighted Round Robin: Not Supported 00:09:22.316 Vendor Specific: Not Supported 00:09:22.316 Reset Timeout: 7500 ms 00:09:22.316 Doorbell Stride: 4 bytes 00:09:22.316 NVM Subsystem Reset: Not Supported 00:09:22.316 Command Sets Supported 
00:09:22.316 NVM Command Set: Supported 00:09:22.316 Boot Partition: Not Supported 00:09:22.316 Memory Page Size Minimum: 4096 bytes 00:09:22.316 Memory Page Size Maximum: 65536 bytes 00:09:22.316 Persistent Memory Region: Not Supported 00:09:22.316 Optional Asynchronous Events Supported 00:09:22.316 Namespace Attribute Notices: Supported 00:09:22.316 Firmware Activation Notices: Not Supported 00:09:22.316 ANA Change Notices: Not Supported 00:09:22.316 PLE Aggregate Log Change Notices: Not Supported 00:09:22.316 LBA Status Info Alert Notices: Not Supported 00:09:22.316 EGE Aggregate Log Change Notices: Not Supported 00:09:22.316 Normal NVM Subsystem Shutdown event: Not Supported 00:09:22.316 Zone Descriptor Change Notices: Not Supported 00:09:22.316 Discovery Log Change Notices: Not Supported 00:09:22.316 Controller Attributes 00:09:22.316 128-bit Host Identifier: Not Supported 00:09:22.316 Non-Operational Permissive Mode: Not Supported 00:09:22.316 NVM Sets: Not Supported 00:09:22.316 Read Recovery Levels: Not Supported 00:09:22.316 Endurance Groups: Not Supported 00:09:22.316 Predictable Latency Mode: Not Supported 00:09:22.316 Traffic Based Keep ALive: Not Supported 00:09:22.316 Namespace Granularity: Not Supported 00:09:22.316 SQ Associations: Not Supported 00:09:22.316 UUID List: Not Supported 00:09:22.316 Multi-Domain Subsystem: Not Supported 00:09:22.316 Fixed Capacity Management: Not Supported 00:09:22.316 Variable Capacity Management: Not Supported 00:09:22.316 Delete Endurance Group: Not Supported 00:09:22.316 Delete NVM Set: Not Supported 00:09:22.316 Extended LBA Formats Supported: Supported 00:09:22.316 Flexible Data Placement Supported: Not Supported 00:09:22.316 00:09:22.316 Controller Memory Buffer Support 00:09:22.316 ================================ 00:09:22.316 Supported: No 00:09:22.316 00:09:22.316 Persistent Memory Region Support 00:09:22.316 ================================ 00:09:22.316 Supported: No 00:09:22.316 00:09:22.316 Admin Command Set Attributes 00:09:22.316 ============================ 00:09:22.316 Security Send/Receive: Not Supported 00:09:22.316 Format NVM: Supported 00:09:22.316 Firmware Activate/Download: Not Supported 00:09:22.316 Namespace Management: Supported 00:09:22.316 Device Self-Test: Not Supported 00:09:22.316 Directives: Supported 00:09:22.316 NVMe-MI: Not Supported 00:09:22.316 Virtualization Management: Not Supported 00:09:22.316 Doorbell Buffer Config: Supported 00:09:22.316 Get LBA Status Capability: Not Supported 00:09:22.316 Command & Feature Lockdown Capability: Not Supported 00:09:22.316 Abort Command Limit: 4 00:09:22.316 Async Event Request Limit: 4 00:09:22.316 Number of Firmware Slots: N/A 00:09:22.316 Firmware Slot 1 Read-Only: N/A 00:09:22.316 Firmware Activation Without Reset: N/A 00:09:22.316 Multiple Update Detection Support: N/A 00:09:22.316 Firmware Update Granularity: No Information Provided 00:09:22.316 Per-Namespace SMART Log: Yes 00:09:22.316 Asymmetric Namespace Access Log Page: Not Supported 00:09:22.316 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:22.316 Command Effects Log Page: Supported 00:09:22.316 Get Log Page Extended Data: Supported 00:09:22.316 Telemetry Log Pages: Not Supported 00:09:22.316 Persistent Event Log Pages: Not Supported 00:09:22.316 Supported Log Pages Log Page: May Support 00:09:22.316 Commands Supported & Effects Log Page: Not Supported 00:09:22.316 Feature Identifiers & Effects Log Page:May Support 00:09:22.316 NVMe-MI Commands & Effects Log Page: May Support 00:09:22.316 Data Area 4 for 
Telemetry Log: Not Supported 00:09:22.316 Error Log Page Entries Supported: 1 00:09:22.316 Keep Alive: Not Supported 00:09:22.316 00:09:22.316 NVM Command Set Attributes 00:09:22.316 ========================== 00:09:22.316 Submission Queue Entry Size 00:09:22.316 Max: 64 00:09:22.316 Min: 64 00:09:22.316 Completion Queue Entry Size 00:09:22.316 Max: 16 00:09:22.316 Min: 16 00:09:22.316 Number of Namespaces: 256 00:09:22.316 Compare Command: Supported 00:09:22.316 Write Uncorrectable Command: Not Supported 00:09:22.316 Dataset Management Command: Supported 00:09:22.316 Write Zeroes Command: Supported 00:09:22.316 Set Features Save Field: Supported 00:09:22.316 Reservations: Not Supported 00:09:22.316 Timestamp: Supported 00:09:22.316 Copy: Supported 00:09:22.316 Volatile Write Cache: Present 00:09:22.316 Atomic Write Unit (Normal): 1 00:09:22.316 Atomic Write Unit (PFail): 1 00:09:22.317 Atomic Compare & Write Unit: 1 00:09:22.317 Fused Compare & Write: Not Supported 00:09:22.317 Scatter-Gather List 00:09:22.317 SGL Command Set: Supported 00:09:22.317 SGL Keyed: Not Supported 00:09:22.317 SGL Bit Bucket Descriptor: Not Supported 00:09:22.317 SGL Metadata Pointer: Not Supported 00:09:22.317 Oversized SGL: Not Supported 00:09:22.317 SGL Metadata Address: Not Supported 00:09:22.317 SGL Offset: Not Supported 00:09:22.317 Transport SGL Data Block: Not Supported 00:09:22.317 Replay Protected Memory Block: Not Supported 00:09:22.317 00:09:22.317 Firmware Slot Information 00:09:22.317 ========================= 00:09:22.317 Active slot: 1 00:09:22.317 Slot 1 Firmware Revision: 1.0 00:09:22.317 00:09:22.317 00:09:22.317 Commands Supported and Effects 00:09:22.317 ============================== 00:09:22.317 Admin Commands 00:09:22.317 -------------- 00:09:22.317 Delete I/O Submission Queue (00h): Supported 00:09:22.317 Create I/O Submission Queue (01h): Supported 00:09:22.317 Get Log Page (02h): Supported 00:09:22.317 Delete I/O Completion Queue (04h): Supported 00:09:22.317 Create I/O Completion Queue (05h): Supported 00:09:22.317 Identify (06h): Supported 00:09:22.317 Abort (08h): Supported 00:09:22.317 Set Features (09h): Supported 00:09:22.317 Get Features (0Ah): Supported 00:09:22.317 Asynchronous Event Request (0Ch): Supported 00:09:22.317 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:22.317 Directive Send (19h): Supported 00:09:22.317 Directive Receive (1Ah): Supported 00:09:22.317 Virtualization Management (1Ch): Supported 00:09:22.317 Doorbell Buffer Config (7Ch): Supported 00:09:22.317 Format NVM (80h): Supported LBA-Change 00:09:22.317 I/O Commands 00:09:22.317 ------------ 00:09:22.317 Flush (00h): Supported LBA-Change 00:09:22.317 Write (01h): Supported LBA-Change 00:09:22.317 Read (02h): Supported 00:09:22.317 Compare (05h): Supported 00:09:22.317 Write Zeroes (08h): Supported LBA-Change 00:09:22.317 Dataset Management (09h): Supported LBA-Change 00:09:22.317 Unknown (0Ch): Supported 00:09:22.317 Unknown (12h): Supported 00:09:22.317 Copy (19h): Supported LBA-Change 00:09:22.317 Unknown (1Dh): Supported LBA-Change 00:09:22.317 00:09:22.317 Error Log 00:09:22.317 ========= 00:09:22.317 00:09:22.317 Arbitration 00:09:22.317 =========== 00:09:22.317 Arbitration Burst: no limit 00:09:22.317 00:09:22.317 Power Management 00:09:22.317 ================ 00:09:22.317 Number of Power States: 1 00:09:22.317 Current Power State: Power State #0 00:09:22.317 Power State #0: 00:09:22.317 Max Power: 25.00 W 00:09:22.317 Non-Operational State: Operational 00:09:22.317 Entry Latency: 16 
microseconds 00:09:22.317 Exit Latency: 4 microseconds 00:09:22.317 Relative Read Throughput: 0 00:09:22.317 Relative Read Latency: 0 00:09:22.317 Relative Write Throughput: 0 00:09:22.317 Relative Write Latency: 0 00:09:22.317 Idle Power: Not Reported 00:09:22.317 Active Power: Not Reported 00:09:22.317 Non-Operational Permissive Mode: Not Supported 00:09:22.317 00:09:22.317 Health Information 00:09:22.317 ================== 00:09:22.317 Critical Warnings: 00:09:22.317 Available Spare Space: OK 00:09:22.317 Temperature: OK 00:09:22.317 Device Reliability: OK 00:09:22.317 Read Only: No 00:09:22.317 Volatile Memory Backup: OK 00:09:22.317 Current Temperature: 323 Kelvin (50 Celsius) 00:09:22.317 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:22.317 Available Spare: 0% 00:09:22.317 Available Spare Threshold: 0% 00:09:22.317 Life Percentage Used: 0% 00:09:22.317 Data Units Read: 2186 00:09:22.317 Data Units Written: 1866 00:09:22.317 Host Read Commands: 102142 00:09:22.317 Host Write Commands: 97912 00:09:22.317 Controller Busy Time: 0 minutes 00:09:22.317 Power Cycles: 0 00:09:22.317 Power On Hours: 0 hours 00:09:22.317 Unsafe Shutdowns: 0 00:09:22.317 Unrecoverable Media Errors: 0 00:09:22.317 Lifetime Error Log Entries: 0 00:09:22.317 Warning Temperature Time: 0 minutes 00:09:22.317 Critical Temperature Time: 0 minutes 00:09:22.317 00:09:22.317 Number of Queues 00:09:22.317 ================ 00:09:22.317 Number of I/O Submission Queues: 64 00:09:22.317 Number of I/O Completion Queues: 64 00:09:22.317 00:09:22.317 ZNS Specific Controller Data 00:09:22.317 ============================ 00:09:22.317 Zone Append Size Limit: 0 00:09:22.317 00:09:22.317 00:09:22.317 Active Namespaces 00:09:22.317 ================= 00:09:22.317 Namespace ID:1 00:09:22.317 Error Recovery Timeout: Unlimited 00:09:22.317 Command Set Identifier: NVM (00h) 00:09:22.317 Deallocate: Supported 00:09:22.317 Deallocated/Unwritten Error: Supported 00:09:22.317 Deallocated Read Value: All 0x00 00:09:22.317 Deallocate in Write Zeroes: Not Supported 00:09:22.317 Deallocated Guard Field: 0xFFFF 00:09:22.317 Flush: Supported 00:09:22.317 Reservation: Not Supported 00:09:22.317 Namespace Sharing Capabilities: Private 00:09:22.317 Size (in LBAs): 1048576 (4GiB) 00:09:22.317 Capacity (in LBAs): 1048576 (4GiB) 00:09:22.317 Utilization (in LBAs): 1048576 (4GiB) 00:09:22.317 Thin Provisioning: Not Supported 00:09:22.317 Per-NS Atomic Units: No 00:09:22.317 Maximum Single Source Range Length: 128 00:09:22.317 Maximum Copy Length: 128 00:09:22.317 Maximum Source Range Count: 128 00:09:22.317 NGUID/EUI64 Never Reused: No 00:09:22.317 Namespace Write Protected: No 00:09:22.317 Number of LBA Formats: 8 00:09:22.317 Current LBA Format: LBA Format #04 00:09:22.317 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:22.317 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:22.317 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:22.317 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:22.317 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:22.317 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:22.317 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:22.317 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:22.317 00:09:22.317 Namespace ID:2 00:09:22.317 Error Recovery Timeout: Unlimited 00:09:22.317 Command Set Identifier: NVM (00h) 00:09:22.317 Deallocate: Supported 00:09:22.317 Deallocated/Unwritten Error: Supported 00:09:22.317 Deallocated Read Value: All 0x00 00:09:22.317 Deallocate in Write 
Zeroes: Not Supported 00:09:22.317 Deallocated Guard Field: 0xFFFF 00:09:22.317 Flush: Supported 00:09:22.317 Reservation: Not Supported 00:09:22.317 Namespace Sharing Capabilities: Private 00:09:22.317 Size (in LBAs): 1048576 (4GiB) 00:09:22.317 Capacity (in LBAs): 1048576 (4GiB) 00:09:22.317 Utilization (in LBAs): 1048576 (4GiB) 00:09:22.317 Thin Provisioning: Not Supported 00:09:22.317 Per-NS Atomic Units: No 00:09:22.317 Maximum Single Source Range Length: 128 00:09:22.317 Maximum Copy Length: 128 00:09:22.317 Maximum Source Range Count: 128 00:09:22.317 NGUID/EUI64 Never Reused: No 00:09:22.317 Namespace Write Protected: No 00:09:22.317 Number of LBA Formats: 8 00:09:22.317 Current LBA Format: LBA Format #04 00:09:22.317 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:22.317 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:22.317 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:22.317 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:22.317 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:22.317 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:22.317 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:22.317 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:22.317 00:09:22.317 Namespace ID:3 00:09:22.317 Error Recovery Timeout: Unlimited 00:09:22.317 Command Set Identifier: NVM (00h) 00:09:22.317 Deallocate: Supported 00:09:22.317 Deallocated/Unwritten Error: Supported 00:09:22.317 Deallocated Read Value: All 0x00 00:09:22.317 Deallocate in Write Zeroes: Not Supported 00:09:22.317 Deallocated Guard Field: 0xFFFF 00:09:22.317 Flush: Supported 00:09:22.317 Reservation: Not Supported 00:09:22.317 Namespace Sharing Capabilities: Private 00:09:22.317 Size (in LBAs): 1048576 (4GiB) 00:09:22.317 Capacity (in LBAs): 1048576 (4GiB) 00:09:22.317 Utilization (in LBAs): 1048576 (4GiB) 00:09:22.317 Thin Provisioning: Not Supported 00:09:22.317 Per-NS Atomic Units: No 00:09:22.317 Maximum Single Source Range Length: 128 00:09:22.317 Maximum Copy Length: 128 00:09:22.317 Maximum Source Range Count: 128 00:09:22.317 NGUID/EUI64 Never Reused: No 00:09:22.317 Namespace Write Protected: No 00:09:22.317 Number of LBA Formats: 8 00:09:22.317 Current LBA Format: LBA Format #04 00:09:22.317 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:22.317 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:22.317 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:22.317 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:22.317 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:22.317 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:22.317 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:22.317 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:22.317 00:09:22.317 02:56:08 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:22.318 02:56:08 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:09:22.577 ===================================================== 00:09:22.577 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:22.577 ===================================================== 00:09:22.577 Controller Capabilities/Features 00:09:22.577 ================================ 00:09:22.577 Vendor ID: 1b36 00:09:22.577 Subsystem Vendor ID: 1af4 00:09:22.577 Serial Number: 12343 00:09:22.577 Model Number: QEMU NVMe Ctrl 00:09:22.577 Firmware Version: 8.0.0 00:09:22.577 Recommended Arb Burst: 6 00:09:22.577 IEEE OUI Identifier: 00 54 52 
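The namespace listings for the 12341 and 12342 controllers above report sizes both as an LBA count and in GiB; with the current LBA format (#04: 4096-byte data blocks, no interleaved metadata) the two figures agree. A minimal shell cross-check, assuming the 4096-byte block size shown in the LBA format table; the variable names are illustrative:

    lbas=1048576    # "Size (in LBAs)" reported for the 12342 namespaces
    block=4096      # Data Size of the current LBA Format (#04)
    echo "$(( lbas * block / 1024 / 1024 / 1024 )) GiB"   # prints "4 GiB", as listed
    # same arithmetic for the 12341 namespace: 1310720 * 4096 bytes = 5 GiB
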
00:09:22.577 Multi-path I/O 00:09:22.577 May have multiple subsystem ports: No 00:09:22.577 May have multiple controllers: Yes 00:09:22.577 Associated with SR-IOV VF: No 00:09:22.577 Max Data Transfer Size: 524288 00:09:22.577 Max Number of Namespaces: 256 00:09:22.577 Max Number of I/O Queues: 64 00:09:22.577 NVMe Specification Version (VS): 1.4 00:09:22.577 NVMe Specification Version (Identify): 1.4 00:09:22.577 Maximum Queue Entries: 2048 00:09:22.577 Contiguous Queues Required: Yes 00:09:22.577 Arbitration Mechanisms Supported 00:09:22.577 Weighted Round Robin: Not Supported 00:09:22.577 Vendor Specific: Not Supported 00:09:22.577 Reset Timeout: 7500 ms 00:09:22.577 Doorbell Stride: 4 bytes 00:09:22.577 NVM Subsystem Reset: Not Supported 00:09:22.577 Command Sets Supported 00:09:22.577 NVM Command Set: Supported 00:09:22.577 Boot Partition: Not Supported 00:09:22.577 Memory Page Size Minimum: 4096 bytes 00:09:22.577 Memory Page Size Maximum: 65536 bytes 00:09:22.577 Persistent Memory Region: Not Supported 00:09:22.577 Optional Asynchronous Events Supported 00:09:22.577 Namespace Attribute Notices: Supported 00:09:22.577 Firmware Activation Notices: Not Supported 00:09:22.577 ANA Change Notices: Not Supported 00:09:22.577 PLE Aggregate Log Change Notices: Not Supported 00:09:22.577 LBA Status Info Alert Notices: Not Supported 00:09:22.577 EGE Aggregate Log Change Notices: Not Supported 00:09:22.577 Normal NVM Subsystem Shutdown event: Not Supported 00:09:22.577 Zone Descriptor Change Notices: Not Supported 00:09:22.577 Discovery Log Change Notices: Not Supported 00:09:22.577 Controller Attributes 00:09:22.577 128-bit Host Identifier: Not Supported 00:09:22.577 Non-Operational Permissive Mode: Not Supported 00:09:22.577 NVM Sets: Not Supported 00:09:22.577 Read Recovery Levels: Not Supported 00:09:22.577 Endurance Groups: Supported 00:09:22.577 Predictable Latency Mode: Not Supported 00:09:22.577 Traffic Based Keep ALive: Not Supported 00:09:22.577 Namespace Granularity: Not Supported 00:09:22.577 SQ Associations: Not Supported 00:09:22.577 UUID List: Not Supported 00:09:22.577 Multi-Domain Subsystem: Not Supported 00:09:22.577 Fixed Capacity Management: Not Supported 00:09:22.577 Variable Capacity Management: Not Supported 00:09:22.577 Delete Endurance Group: Not Supported 00:09:22.577 Delete NVM Set: Not Supported 00:09:22.577 Extended LBA Formats Supported: Supported 00:09:22.577 Flexible Data Placement Supported: Supported 00:09:22.577 00:09:22.577 Controller Memory Buffer Support 00:09:22.577 ================================ 00:09:22.577 Supported: No 00:09:22.577 00:09:22.577 Persistent Memory Region Support 00:09:22.577 ================================ 00:09:22.577 Supported: No 00:09:22.577 00:09:22.577 Admin Command Set Attributes 00:09:22.577 ============================ 00:09:22.577 Security Send/Receive: Not Supported 00:09:22.577 Format NVM: Supported 00:09:22.577 Firmware Activate/Download: Not Supported 00:09:22.577 Namespace Management: Supported 00:09:22.577 Device Self-Test: Not Supported 00:09:22.577 Directives: Supported 00:09:22.577 NVMe-MI: Not Supported 00:09:22.577 Virtualization Management: Not Supported 00:09:22.577 Doorbell Buffer Config: Supported 00:09:22.577 Get LBA Status Capability: Not Supported 00:09:22.577 Command & Feature Lockdown Capability: Not Supported 00:09:22.577 Abort Command Limit: 4 00:09:22.577 Async Event Request Limit: 4 00:09:22.577 Number of Firmware Slots: N/A 00:09:22.577 Firmware Slot 1 Read-Only: N/A 00:09:22.577 Firmware Activation 
Without Reset: N/A 00:09:22.577 Multiple Update Detection Support: N/A 00:09:22.577 Firmware Update Granularity: No Information Provided 00:09:22.577 Per-Namespace SMART Log: Yes 00:09:22.577 Asymmetric Namespace Access Log Page: Not Supported 00:09:22.577 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:22.577 Command Effects Log Page: Supported 00:09:22.577 Get Log Page Extended Data: Supported 00:09:22.577 Telemetry Log Pages: Not Supported 00:09:22.577 Persistent Event Log Pages: Not Supported 00:09:22.577 Supported Log Pages Log Page: May Support 00:09:22.577 Commands Supported & Effects Log Page: Not Supported 00:09:22.577 Feature Identifiers & Effects Log Page:May Support 00:09:22.577 NVMe-MI Commands & Effects Log Page: May Support 00:09:22.577 Data Area 4 for Telemetry Log: Not Supported 00:09:22.577 Error Log Page Entries Supported: 1 00:09:22.577 Keep Alive: Not Supported 00:09:22.577 00:09:22.577 NVM Command Set Attributes 00:09:22.577 ========================== 00:09:22.577 Submission Queue Entry Size 00:09:22.577 Max: 64 00:09:22.577 Min: 64 00:09:22.577 Completion Queue Entry Size 00:09:22.577 Max: 16 00:09:22.577 Min: 16 00:09:22.577 Number of Namespaces: 256 00:09:22.577 Compare Command: Supported 00:09:22.577 Write Uncorrectable Command: Not Supported 00:09:22.577 Dataset Management Command: Supported 00:09:22.577 Write Zeroes Command: Supported 00:09:22.577 Set Features Save Field: Supported 00:09:22.577 Reservations: Not Supported 00:09:22.578 Timestamp: Supported 00:09:22.578 Copy: Supported 00:09:22.578 Volatile Write Cache: Present 00:09:22.578 Atomic Write Unit (Normal): 1 00:09:22.578 Atomic Write Unit (PFail): 1 00:09:22.578 Atomic Compare & Write Unit: 1 00:09:22.578 Fused Compare & Write: Not Supported 00:09:22.578 Scatter-Gather List 00:09:22.578 SGL Command Set: Supported 00:09:22.578 SGL Keyed: Not Supported 00:09:22.578 SGL Bit Bucket Descriptor: Not Supported 00:09:22.578 SGL Metadata Pointer: Not Supported 00:09:22.578 Oversized SGL: Not Supported 00:09:22.578 SGL Metadata Address: Not Supported 00:09:22.578 SGL Offset: Not Supported 00:09:22.578 Transport SGL Data Block: Not Supported 00:09:22.578 Replay Protected Memory Block: Not Supported 00:09:22.578 00:09:22.578 Firmware Slot Information 00:09:22.578 ========================= 00:09:22.578 Active slot: 1 00:09:22.578 Slot 1 Firmware Revision: 1.0 00:09:22.578 00:09:22.578 00:09:22.578 Commands Supported and Effects 00:09:22.578 ============================== 00:09:22.578 Admin Commands 00:09:22.578 -------------- 00:09:22.578 Delete I/O Submission Queue (00h): Supported 00:09:22.578 Create I/O Submission Queue (01h): Supported 00:09:22.578 Get Log Page (02h): Supported 00:09:22.578 Delete I/O Completion Queue (04h): Supported 00:09:22.578 Create I/O Completion Queue (05h): Supported 00:09:22.578 Identify (06h): Supported 00:09:22.578 Abort (08h): Supported 00:09:22.578 Set Features (09h): Supported 00:09:22.578 Get Features (0Ah): Supported 00:09:22.578 Asynchronous Event Request (0Ch): Supported 00:09:22.578 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:22.578 Directive Send (19h): Supported 00:09:22.578 Directive Receive (1Ah): Supported 00:09:22.578 Virtualization Management (1Ch): Supported 00:09:22.578 Doorbell Buffer Config (7Ch): Supported 00:09:22.578 Format NVM (80h): Supported LBA-Change 00:09:22.578 I/O Commands 00:09:22.578 ------------ 00:09:22.578 Flush (00h): Supported LBA-Change 00:09:22.578 Write (01h): Supported LBA-Change 00:09:22.578 Read (02h): Supported 
00:09:22.578 Compare (05h): Supported 00:09:22.578 Write Zeroes (08h): Supported LBA-Change 00:09:22.578 Dataset Management (09h): Supported LBA-Change 00:09:22.578 Unknown (0Ch): Supported 00:09:22.578 Unknown (12h): Supported 00:09:22.578 Copy (19h): Supported LBA-Change 00:09:22.578 Unknown (1Dh): Supported LBA-Change 00:09:22.578 00:09:22.578 Error Log 00:09:22.578 ========= 00:09:22.578 00:09:22.578 Arbitration 00:09:22.578 =========== 00:09:22.578 Arbitration Burst: no limit 00:09:22.578 00:09:22.578 Power Management 00:09:22.578 ================ 00:09:22.578 Number of Power States: 1 00:09:22.578 Current Power State: Power State #0 00:09:22.578 Power State #0: 00:09:22.578 Max Power: 25.00 W 00:09:22.578 Non-Operational State: Operational 00:09:22.578 Entry Latency: 16 microseconds 00:09:22.578 Exit Latency: 4 microseconds 00:09:22.578 Relative Read Throughput: 0 00:09:22.578 Relative Read Latency: 0 00:09:22.578 Relative Write Throughput: 0 00:09:22.578 Relative Write Latency: 0 00:09:22.578 Idle Power: Not Reported 00:09:22.578 Active Power: Not Reported 00:09:22.578 Non-Operational Permissive Mode: Not Supported 00:09:22.578 00:09:22.578 Health Information 00:09:22.578 ================== 00:09:22.578 Critical Warnings: 00:09:22.578 Available Spare Space: OK 00:09:22.578 Temperature: OK 00:09:22.578 Device Reliability: OK 00:09:22.578 Read Only: No 00:09:22.578 Volatile Memory Backup: OK 00:09:22.578 Current Temperature: 323 Kelvin (50 Celsius) 00:09:22.578 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:22.578 Available Spare: 0% 00:09:22.578 Available Spare Threshold: 0% 00:09:22.578 Life Percentage Used: 0% 00:09:22.578 Data Units Read: 788 00:09:22.578 Data Units Written: 681 00:09:22.578 Host Read Commands: 34508 00:09:22.578 Host Write Commands: 33098 00:09:22.578 Controller Busy Time: 0 minutes 00:09:22.578 Power Cycles: 0 00:09:22.578 Power On Hours: 0 hours 00:09:22.578 Unsafe Shutdowns: 0 00:09:22.578 Unrecoverable Media Errors: 0 00:09:22.578 Lifetime Error Log Entries: 0 00:09:22.578 Warning Temperature Time: 0 minutes 00:09:22.578 Critical Temperature Time: 0 minutes 00:09:22.578 00:09:22.578 Number of Queues 00:09:22.578 ================ 00:09:22.578 Number of I/O Submission Queues: 64 00:09:22.578 Number of I/O Completion Queues: 64 00:09:22.578 00:09:22.578 ZNS Specific Controller Data 00:09:22.578 ============================ 00:09:22.578 Zone Append Size Limit: 0 00:09:22.578 00:09:22.578 00:09:22.578 Active Namespaces 00:09:22.578 ================= 00:09:22.578 Namespace ID:1 00:09:22.578 Error Recovery Timeout: Unlimited 00:09:22.578 Command Set Identifier: NVM (00h) 00:09:22.578 Deallocate: Supported 00:09:22.578 Deallocated/Unwritten Error: Supported 00:09:22.578 Deallocated Read Value: All 0x00 00:09:22.578 Deallocate in Write Zeroes: Not Supported 00:09:22.578 Deallocated Guard Field: 0xFFFF 00:09:22.578 Flush: Supported 00:09:22.578 Reservation: Not Supported 00:09:22.578 Namespace Sharing Capabilities: Multiple Controllers 00:09:22.578 Size (in LBAs): 262144 (1GiB) 00:09:22.578 Capacity (in LBAs): 262144 (1GiB) 00:09:22.578 Utilization (in LBAs): 262144 (1GiB) 00:09:22.578 Thin Provisioning: Not Supported 00:09:22.578 Per-NS Atomic Units: No 00:09:22.578 Maximum Single Source Range Length: 128 00:09:22.578 Maximum Copy Length: 128 00:09:22.578 Maximum Source Range Count: 128 00:09:22.578 NGUID/EUI64 Never Reused: No 00:09:22.578 Namespace Write Protected: No 00:09:22.578 Endurance group ID: 1 00:09:22.578 Number of LBA Formats: 8 00:09:22.578 Current 
LBA Format: LBA Format #04 00:09:22.578 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:22.578 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:22.578 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:22.578 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:22.578 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:22.578 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:22.578 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:22.578 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:22.578 00:09:22.578 Get Feature FDP: 00:09:22.578 ================ 00:09:22.578 Enabled: Yes 00:09:22.578 FDP configuration index: 0 00:09:22.578 00:09:22.578 FDP configurations log page 00:09:22.578 =========================== 00:09:22.578 Number of FDP configurations: 1 00:09:22.578 Version: 0 00:09:22.578 Size: 112 00:09:22.578 FDP Configuration Descriptor: 0 00:09:22.578 Descriptor Size: 96 00:09:22.578 Reclaim Group Identifier format: 2 00:09:22.579 FDP Volatile Write Cache: Not Present 00:09:22.579 FDP Configuration: Valid 00:09:22.579 Vendor Specific Size: 0 00:09:22.579 Number of Reclaim Groups: 2 00:09:22.579 Number of Recalim Unit Handles: 8 00:09:22.579 Max Placement Identifiers: 128 00:09:22.579 Number of Namespaces Suppprted: 256 00:09:22.579 Reclaim unit Nominal Size: 6000000 bytes 00:09:22.579 Estimated Reclaim Unit Time Limit: Not Reported 00:09:22.579 RUH Desc #000: RUH Type: Initially Isolated 00:09:22.579 RUH Desc #001: RUH Type: Initially Isolated 00:09:22.579 RUH Desc #002: RUH Type: Initially Isolated 00:09:22.579 RUH Desc #003: RUH Type: Initially Isolated 00:09:22.579 RUH Desc #004: RUH Type: Initially Isolated 00:09:22.579 RUH Desc #005: RUH Type: Initially Isolated 00:09:22.579 RUH Desc #006: RUH Type: Initially Isolated 00:09:22.579 RUH Desc #007: RUH Type: Initially Isolated 00:09:22.579 00:09:22.579 FDP reclaim unit handle usage log page 00:09:22.579 ====================================== 00:09:22.579 Number of Reclaim Unit Handles: 8 00:09:22.579 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:22.579 RUH Usage Desc #001: RUH Attributes: Unused 00:09:22.579 RUH Usage Desc #002: RUH Attributes: Unused 00:09:22.579 RUH Usage Desc #003: RUH Attributes: Unused 00:09:22.579 RUH Usage Desc #004: RUH Attributes: Unused 00:09:22.579 RUH Usage Desc #005: RUH Attributes: Unused 00:09:22.579 RUH Usage Desc #006: RUH Attributes: Unused 00:09:22.579 RUH Usage Desc #007: RUH Attributes: Unused 00:09:22.579 00:09:22.579 FDP statistics log page 00:09:22.579 ======================= 00:09:22.579 Host bytes with metadata written: 432513024 00:09:22.579 Media bytes with metadata written: 432558080 00:09:22.579 Media bytes erased: 0 00:09:22.579 00:09:22.579 FDP events log page 00:09:22.579 =================== 00:09:22.579 Number of FDP events: 0 00:09:22.579 00:09:22.579 00:09:22.579 real 0m1.436s 00:09:22.579 user 0m0.586s 00:09:22.579 sys 0m0.667s 00:09:22.579 02:56:08 nvme.nvme_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:22.579 02:56:08 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:09:22.579 ************************************ 00:09:22.579 END TEST nvme_identify 00:09:22.579 ************************************ 00:09:22.579 02:56:08 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:09:22.579 02:56:08 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:22.579 02:56:08 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:22.579 02:56:08 nvme -- 
common/autotest_common.sh@10 -- # set +x 00:09:22.579 ************************************ 00:09:22.579 START TEST nvme_perf 00:09:22.579 ************************************ 00:09:22.579 02:56:08 nvme.nvme_perf -- common/autotest_common.sh@1121 -- # nvme_perf 00:09:22.579 02:56:08 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:09:23.959 Initializing NVMe Controllers 00:09:23.959 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:23.959 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:23.959 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:23.959 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:23.959 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:23.959 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:23.959 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:23.959 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:23.959 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:23.959 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:23.959 Initialization complete. Launching workers. 00:09:23.959 ======================================================== 00:09:23.959 Latency(us) 00:09:23.959 Device Information : IOPS MiB/s Average min max 00:09:23.959 PCIE (0000:00:10.0) NSID 1 from core 0: 12721.26 149.08 10064.06 5951.07 36221.39 00:09:23.959 PCIE (0000:00:11.0) NSID 1 from core 0: 12721.26 149.08 10053.44 5688.04 35188.78 00:09:23.959 PCIE (0000:00:13.0) NSID 1 from core 0: 12721.26 149.08 10039.86 4924.33 34725.94 00:09:23.959 PCIE (0000:00:12.0) NSID 1 from core 0: 12785.18 149.83 9975.81 4595.07 28869.82 00:09:23.959 PCIE (0000:00:12.0) NSID 2 from core 0: 12785.18 149.83 9962.38 4299.69 27967.50 00:09:23.959 PCIE (0000:00:12.0) NSID 3 from core 0: 12785.18 149.83 9948.79 3893.12 27052.23 00:09:23.959 ======================================================== 00:09:23.959 Total : 76519.31 896.71 10007.28 3893.12 36221.39 00:09:23.959 00:09:23.959 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:23.959 ================================================================================= 00:09:23.959 1.00000% : 7983.476us 00:09:23.959 10.00000% : 8579.258us 00:09:23.959 25.00000% : 9115.462us 00:09:23.959 50.00000% : 9830.400us 00:09:23.959 75.00000% : 10545.338us 00:09:23.959 90.00000% : 11141.120us 00:09:23.959 95.00000% : 11617.745us 00:09:23.959 98.00000% : 13285.935us 00:09:23.959 99.00000% : 28597.527us 00:09:23.959 99.50000% : 34793.658us 00:09:23.959 99.90000% : 35985.222us 00:09:23.959 99.99000% : 36223.535us 00:09:23.959 99.99900% : 36223.535us 00:09:23.959 99.99990% : 36223.535us 00:09:23.959 99.99999% : 36223.535us 00:09:23.959 00:09:23.959 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:09:23.959 ================================================================================= 00:09:23.959 1.00000% : 7983.476us 00:09:23.959 10.00000% : 8638.836us 00:09:23.959 25.00000% : 9115.462us 00:09:23.959 50.00000% : 9889.978us 00:09:23.959 75.00000% : 10485.760us 00:09:23.959 90.00000% : 11081.542us 00:09:23.959 95.00000% : 11558.167us 00:09:23.959 98.00000% : 13285.935us 00:09:23.959 99.00000% : 28001.745us 00:09:23.959 99.50000% : 33840.407us 00:09:23.959 99.90000% : 35031.971us 00:09:23.959 99.99000% : 35270.284us 00:09:23.959 99.99900% : 35270.284us 00:09:23.959 99.99990% : 35270.284us 00:09:23.959 99.99999% : 35270.284us 00:09:23.959 00:09:23.959 Summary latency 
data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:23.959 ================================================================================= 00:09:23.959 1.00000% : 7923.898us 00:09:23.959 10.00000% : 8579.258us 00:09:23.960 25.00000% : 9115.462us 00:09:23.960 50.00000% : 9830.400us 00:09:23.960 75.00000% : 10545.338us 00:09:23.960 90.00000% : 11081.542us 00:09:23.960 95.00000% : 11558.167us 00:09:23.960 98.00000% : 13405.091us 00:09:23.960 99.00000% : 27525.120us 00:09:23.960 99.50000% : 33363.782us 00:09:23.960 99.90000% : 34555.345us 00:09:23.960 99.99000% : 34793.658us 00:09:23.960 99.99900% : 34793.658us 00:09:23.960 99.99990% : 34793.658us 00:09:23.960 99.99999% : 34793.658us 00:09:23.960 00:09:23.960 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:09:23.960 ================================================================================= 00:09:23.960 1.00000% : 7864.320us 00:09:23.960 10.00000% : 8579.258us 00:09:23.960 25.00000% : 9115.462us 00:09:23.960 50.00000% : 9889.978us 00:09:23.960 75.00000% : 10545.338us 00:09:23.960 90.00000% : 11081.542us 00:09:23.960 95.00000% : 11558.167us 00:09:23.960 98.00000% : 13226.356us 00:09:23.960 99.00000% : 14477.498us 00:09:23.960 99.50000% : 22758.865us 00:09:23.960 99.90000% : 28716.684us 00:09:23.960 99.99000% : 28954.996us 00:09:23.960 99.99900% : 28954.996us 00:09:23.960 99.99990% : 28954.996us 00:09:23.960 99.99999% : 28954.996us 00:09:23.960 00:09:23.960 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:09:23.960 ================================================================================= 00:09:23.960 1.00000% : 7626.007us 00:09:23.960 10.00000% : 8579.258us 00:09:23.960 25.00000% : 9115.462us 00:09:23.960 50.00000% : 9889.978us 00:09:23.960 75.00000% : 10485.760us 00:09:23.960 90.00000% : 11081.542us 00:09:23.960 95.00000% : 11558.167us 00:09:23.960 98.00000% : 13166.778us 00:09:23.960 99.00000% : 14417.920us 00:09:23.960 99.50000% : 21924.771us 00:09:23.960 99.90000% : 27763.433us 00:09:23.960 99.99000% : 28001.745us 00:09:23.960 99.99900% : 28001.745us 00:09:23.960 99.99990% : 28001.745us 00:09:23.960 99.99999% : 28001.745us 00:09:23.960 00:09:23.960 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:09:23.960 ================================================================================= 00:09:23.960 1.00000% : 7268.538us 00:09:23.960 10.00000% : 8579.258us 00:09:23.960 25.00000% : 9115.462us 00:09:23.960 50.00000% : 9889.978us 00:09:23.960 75.00000% : 10485.760us 00:09:23.960 90.00000% : 11081.542us 00:09:23.960 95.00000% : 11558.167us 00:09:23.960 98.00000% : 13166.778us 00:09:23.960 99.00000% : 14417.920us 00:09:23.960 99.50000% : 21090.676us 00:09:23.960 99.90000% : 26810.182us 00:09:23.960 99.99000% : 27048.495us 00:09:23.960 99.99900% : 27167.651us 00:09:23.960 99.99990% : 27167.651us 00:09:23.960 99.99999% : 27167.651us 00:09:23.960 00:09:23.960 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:23.960 ============================================================================== 00:09:23.960 Range in us Cumulative IO count 00:09:23.960 5928.029 - 5957.818: 0.0079% ( 1) 00:09:23.960 5957.818 - 5987.607: 0.0236% ( 2) 00:09:23.960 5987.607 - 6017.396: 0.0471% ( 3) 00:09:23.960 6017.396 - 6047.185: 0.0550% ( 1) 00:09:23.960 6047.185 - 6076.975: 0.0707% ( 2) 00:09:23.960 6076.975 - 6106.764: 0.0864% ( 2) 00:09:23.960 6106.764 - 6136.553: 0.1021% ( 2) 00:09:23.960 6136.553 - 6166.342: 0.1099% ( 1) 00:09:23.960 6166.342 - 6196.131: 0.1256% ( 2) 
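The Device Information summary above reports both IOPS and MiB/s per namespace; the MiB/s column is consistent with IOPS multiplied by the 12288-byte I/O size passed to spdk_nvme_perf (-o 12288). A minimal awk cross-check using the PCIE (0000:00:10.0) row, with the numbers taken from the table above:

    awk 'BEGIN { iops = 12721.26; io_size = 12288;   # -o 12288, i.e. 12 KiB reads
                 printf "%.2f MiB/s\n", iops * io_size / (1024 * 1024) }'
    # prints 149.08 MiB/s, matching the 0000:00:10.0 row in the summary table
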
00:09:23.960 6196.131 - 6225.920: 0.1413% ( 2) 00:09:23.960 6225.920 - 6255.709: 0.1570% ( 2) 00:09:23.960 6255.709 - 6285.498: 0.1649% ( 1) 00:09:23.960 6285.498 - 6315.287: 0.1806% ( 2) 00:09:23.960 6315.287 - 6345.076: 0.2041% ( 3) 00:09:23.960 6345.076 - 6374.865: 0.2120% ( 1) 00:09:23.960 6374.865 - 6404.655: 0.2356% ( 3) 00:09:23.960 6434.444 - 6464.233: 0.2591% ( 3) 00:09:23.960 6464.233 - 6494.022: 0.2670% ( 1) 00:09:23.960 6494.022 - 6523.811: 0.2827% ( 2) 00:09:23.960 6523.811 - 6553.600: 0.2905% ( 1) 00:09:23.960 6553.600 - 6583.389: 0.3062% ( 2) 00:09:23.960 6583.389 - 6613.178: 0.3219% ( 2) 00:09:23.960 6613.178 - 6642.967: 0.3376% ( 2) 00:09:23.960 6642.967 - 6672.756: 0.3533% ( 2) 00:09:23.960 6672.756 - 6702.545: 0.3690% ( 2) 00:09:23.960 6702.545 - 6732.335: 0.3847% ( 2) 00:09:23.960 6732.335 - 6762.124: 0.4004% ( 2) 00:09:23.960 6762.124 - 6791.913: 0.4083% ( 1) 00:09:23.960 6791.913 - 6821.702: 0.4240% ( 2) 00:09:23.960 6821.702 - 6851.491: 0.4476% ( 3) 00:09:23.960 6851.491 - 6881.280: 0.4633% ( 2) 00:09:23.960 6881.280 - 6911.069: 0.4790% ( 2) 00:09:23.960 6911.069 - 6940.858: 0.4868% ( 1) 00:09:23.960 6940.858 - 6970.647: 0.5025% ( 2) 00:09:23.960 7685.585 - 7745.164: 0.5418% ( 5) 00:09:23.960 7745.164 - 7804.742: 0.6360% ( 12) 00:09:23.960 7804.742 - 7864.320: 0.8244% ( 24) 00:09:23.960 7864.320 - 7923.898: 0.9972% ( 22) 00:09:23.960 7923.898 - 7983.476: 1.2484% ( 32) 00:09:23.960 7983.476 - 8043.055: 1.5939% ( 44) 00:09:23.960 8043.055 - 8102.633: 2.0179% ( 54) 00:09:23.960 8102.633 - 8162.211: 2.6068% ( 75) 00:09:23.960 8162.211 - 8221.789: 3.3998% ( 101) 00:09:23.960 8221.789 - 8281.367: 4.3263% ( 118) 00:09:23.960 8281.367 - 8340.945: 5.3706% ( 133) 00:09:23.960 8340.945 - 8400.524: 6.4463% ( 137) 00:09:23.960 8400.524 - 8460.102: 7.7418% ( 165) 00:09:23.960 8460.102 - 8519.680: 9.0217% ( 163) 00:09:23.960 8519.680 - 8579.258: 10.4114% ( 177) 00:09:23.960 8579.258 - 8638.836: 11.8954% ( 189) 00:09:23.960 8638.836 - 8698.415: 13.3951% ( 191) 00:09:23.960 8698.415 - 8757.993: 15.0832% ( 215) 00:09:23.960 8757.993 - 8817.571: 16.7871% ( 217) 00:09:23.960 8817.571 - 8877.149: 18.5694% ( 227) 00:09:23.960 8877.149 - 8936.727: 20.3125% ( 222) 00:09:23.960 8936.727 - 8996.305: 22.1106% ( 229) 00:09:23.960 8996.305 - 9055.884: 24.0185% ( 243) 00:09:23.960 9055.884 - 9115.462: 25.8401% ( 232) 00:09:23.960 9115.462 - 9175.040: 27.6696% ( 233) 00:09:23.960 9175.040 - 9234.618: 29.5540% ( 240) 00:09:23.960 9234.618 - 9294.196: 31.4934% ( 247) 00:09:23.960 9294.196 - 9353.775: 33.2758% ( 227) 00:09:23.960 9353.775 - 9413.353: 35.1759% ( 242) 00:09:23.960 9413.353 - 9472.931: 37.1467% ( 251) 00:09:23.960 9472.931 - 9532.509: 39.2981% ( 274) 00:09:23.960 9532.509 - 9592.087: 41.3631% ( 263) 00:09:23.960 9592.087 - 9651.665: 43.4202% ( 262) 00:09:23.960 9651.665 - 9711.244: 45.6658% ( 286) 00:09:23.960 9711.244 - 9770.822: 47.8643% ( 280) 00:09:23.960 9770.822 - 9830.400: 50.1099% ( 286) 00:09:23.960 9830.400 - 9889.978: 52.3869% ( 290) 00:09:23.960 9889.978 - 9949.556: 54.7189% ( 297) 00:09:23.960 9949.556 - 10009.135: 56.9724% ( 287) 00:09:23.960 10009.135 - 10068.713: 59.1002% ( 271) 00:09:23.960 10068.713 - 10128.291: 61.3065% ( 281) 00:09:23.960 10128.291 - 10187.869: 63.4815% ( 277) 00:09:23.960 10187.869 - 10247.447: 65.6486% ( 276) 00:09:23.960 10247.447 - 10307.025: 67.9413% ( 292) 00:09:23.960 10307.025 - 10366.604: 70.1947% ( 287) 00:09:23.960 10366.604 - 10426.182: 72.2440% ( 261) 00:09:23.960 10426.182 - 10485.760: 74.4111% ( 276) 00:09:23.960 10485.760 - 10545.338: 
76.3034% ( 241) 00:09:23.960 10545.338 - 10604.916: 78.2035% ( 242) 00:09:23.960 10604.916 - 10664.495: 79.9388% ( 221) 00:09:23.960 10664.495 - 10724.073: 81.5405% ( 204) 00:09:23.960 10724.073 - 10783.651: 83.1423% ( 204) 00:09:23.960 10783.651 - 10843.229: 84.5791% ( 183) 00:09:23.960 10843.229 - 10902.807: 85.9454% ( 174) 00:09:23.960 10902.807 - 10962.385: 87.2016% ( 160) 00:09:23.960 10962.385 - 11021.964: 88.3872% ( 151) 00:09:23.960 11021.964 - 11081.542: 89.4629% ( 137) 00:09:23.960 11081.542 - 11141.120: 90.3816% ( 117) 00:09:23.960 11141.120 - 11200.698: 91.2610% ( 112) 00:09:23.960 11200.698 - 11260.276: 91.9912% ( 93) 00:09:23.960 11260.276 - 11319.855: 92.7528% ( 97) 00:09:23.960 11319.855 - 11379.433: 93.4281% ( 86) 00:09:23.960 11379.433 - 11439.011: 94.0484% ( 79) 00:09:23.960 11439.011 - 11498.589: 94.5352% ( 62) 00:09:23.960 11498.589 - 11558.167: 94.8964% ( 46) 00:09:23.960 11558.167 - 11617.745: 95.2104% ( 40) 00:09:23.960 11617.745 - 11677.324: 95.4695% ( 33) 00:09:23.960 11677.324 - 11736.902: 95.6894% ( 28) 00:09:23.960 11736.902 - 11796.480: 95.9014% ( 27) 00:09:23.961 11796.480 - 11856.058: 96.0820% ( 23) 00:09:23.961 11856.058 - 11915.636: 96.2233% ( 18) 00:09:23.961 11915.636 - 11975.215: 96.3411% ( 15) 00:09:23.961 11975.215 - 12034.793: 96.4432% ( 13) 00:09:23.961 12034.793 - 12094.371: 96.5374% ( 12) 00:09:23.961 12094.371 - 12153.949: 96.6316% ( 12) 00:09:23.961 12153.949 - 12213.527: 96.7415% ( 14) 00:09:23.961 12213.527 - 12273.105: 96.8122% ( 9) 00:09:23.961 12273.105 - 12332.684: 96.8750% ( 8) 00:09:23.961 12332.684 - 12392.262: 96.9300% ( 7) 00:09:23.961 12392.262 - 12451.840: 97.0006% ( 9) 00:09:23.961 12451.840 - 12511.418: 97.0320% ( 4) 00:09:23.961 12511.418 - 12570.996: 97.1341% ( 13) 00:09:23.961 12570.996 - 12630.575: 97.2048% ( 9) 00:09:23.961 12630.575 - 12690.153: 97.3147% ( 14) 00:09:23.961 12690.153 - 12749.731: 97.3932% ( 10) 00:09:23.961 12749.731 - 12809.309: 97.4874% ( 12) 00:09:23.961 12809.309 - 12868.887: 97.5424% ( 7) 00:09:23.961 12868.887 - 12928.465: 97.6445% ( 13) 00:09:23.961 12928.465 - 12988.044: 97.6994% ( 7) 00:09:23.961 12988.044 - 13047.622: 97.7780% ( 10) 00:09:23.961 13047.622 - 13107.200: 97.8486% ( 9) 00:09:23.961 13107.200 - 13166.778: 97.9350% ( 11) 00:09:23.961 13166.778 - 13226.356: 97.9978% ( 8) 00:09:23.961 13226.356 - 13285.935: 98.0763% ( 10) 00:09:23.961 13285.935 - 13345.513: 98.1470% ( 9) 00:09:23.961 13345.513 - 13405.091: 98.2334% ( 11) 00:09:23.961 13405.091 - 13464.669: 98.3040% ( 9) 00:09:23.961 13464.669 - 13524.247: 98.3904% ( 11) 00:09:23.961 13524.247 - 13583.825: 98.4611% ( 9) 00:09:23.961 13583.825 - 13643.404: 98.5239% ( 8) 00:09:23.961 13643.404 - 13702.982: 98.6181% ( 12) 00:09:23.961 13702.982 - 13762.560: 98.6731% ( 7) 00:09:23.961 13762.560 - 13822.138: 98.7516% ( 10) 00:09:23.961 13822.138 - 13881.716: 98.8144% ( 8) 00:09:23.961 13881.716 - 13941.295: 98.8536% ( 5) 00:09:23.961 13941.295 - 14000.873: 98.8772% ( 3) 00:09:23.961 14000.873 - 14060.451: 98.9086% ( 4) 00:09:23.961 14060.451 - 14120.029: 98.9322% ( 3) 00:09:23.961 14120.029 - 14179.607: 98.9557% ( 3) 00:09:23.961 14179.607 - 14239.185: 98.9636% ( 1) 00:09:23.961 14239.185 - 14298.764: 98.9714% ( 1) 00:09:23.961 14298.764 - 14358.342: 98.9950% ( 3) 00:09:23.961 28478.371 - 28597.527: 99.0185% ( 3) 00:09:23.961 28597.527 - 28716.684: 99.0421% ( 3) 00:09:23.961 28716.684 - 28835.840: 99.0656% ( 3) 00:09:23.961 28835.840 - 28954.996: 99.1128% ( 6) 00:09:23.961 28954.996 - 29074.153: 99.1442% ( 4) 00:09:23.961 29074.153 - 29193.309: 
00:09:23.961 [cumulative latency buckets continue through 36223.535 us, reaching 100.0000%]
00:09:23.961 
00:09:23.961 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:09:23.961 ==============================================================================
00:09:23.961        Range in us     Cumulative    IO count
00:09:23.961 [cumulative latency buckets from 5659.927 us through 35270.284 us, reaching 100.0000%]
00:09:23.962 
00:09:23.962 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:09:23.962 ==============================================================================
00:09:23.962        Range in us     Cumulative    IO count
00:09:23.962 [cumulative latency buckets from 4915.200 us through 34793.658 us, reaching 100.0000%]
00:09:23.963 
00:09:23.963 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:09:23.963 ==============================================================================
00:09:23.963        Range in us     Cumulative    IO count
00:09:23.963 [cumulative latency buckets from 4587.520 us through 28954.996 us, reaching 100.0000%]
00:09:23.964 
00:09:23.964 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:09:23.964 ==============================================================================
00:09:23.964        Range in us     Cumulative    IO count
00:09:23.964 [cumulative latency buckets from 4289.629 us through 28001.745 us, reaching 100.0000%]
00:09:23.966 
00:09:23.966 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:09:23.966 ==============================================================================
00:09:23.966        Range in us     Cumulative    IO count
00:09:23.966 [cumulative latency buckets from 3872.582 us through 27167.651 us, reaching 100.0000%]
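The bucket lines above are cumulative: each entry gives a latency range in microseconds together with the percentage of all I/Os that completed at or below the upper edge of that range (the count in parentheses is the I/Os that landed in that bucket alone). A percentile can therefore be read off as the first bucket whose cumulative percentage reaches the target, which is how the summary percentiles later in this log line up with the histograms. The sketch below only illustrates that arithmetic; the regular expression and the parse_histogram/percentile helpers are hypothetical, not part of SPDK or of this test script.

import re

# One histogram entry as printed above, e.g. "9770.822 - 9830.400: 50.2670% ( 277)"
BUCKET_RE = re.compile(
    r"(?P<low>\d+\.\d+)\s*-\s*(?P<high>\d+\.\d+):\s*"
    r"(?P<cum>\d+\.\d+)%\s*\(\s*(?P<count>\d+)\)"
)

def parse_histogram(lines):
    """Yield (bucket_upper_edge_us, cumulative_percent) for each bucket line."""
    for line in lines:
        m = BUCKET_RE.search(line)
        if m:
            yield float(m.group("high")), float(m.group("cum"))

def percentile(buckets, target):
    """Upper edge (us) of the first bucket whose cumulative percentage reaches target."""
    for high_us, cum in buckets:
        if cum >= target:
            return high_us
    return None

# Illustrative (made-up) entries in the same format:
sample = [
    "9500.000 - 9600.000: 49.8000% ( 120)",
    "9600.000 - 9700.000: 51.2000% ( 118)",
]
print(percentile(parse_histogram(sample), 50.0))  # -> 9700.0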
00:09:23.967 
00:09:23.967 02:56:09 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:09:25.345 Initializing NVMe Controllers
00:09:25.345 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:09:25.345 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:09:25.345 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:09:25.345 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:09:25.345 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:09:25.345 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:09:25.345 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:09:25.345 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:09:25.345 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:09:25.345 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:09:25.345 Initialization complete. Launching workers.
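Before the write-workload results print, it is worth sanity-checking how the numbers in the summary below hang together. This is a minimal sketch, assuming the conventional spdk_nvme_perf flag meanings for -q (queue depth per namespace), -o (I/O size in bytes) and -t (run time in seconds); the exact semantics of -LL and -i should be checked against the SPDK revision built earlier in this job.

# Relate the perf invocation above to the per-namespace summary printed below.
QUEUE_DEPTH = 128          # -q 128 (assumed: outstanding I/Os per namespace)
IO_SIZE_BYTES = 12288      # -o 12288 (12 KiB writes)

iops_per_ns = 12479.01     # "IOPS" column reported below
avg_latency_us = 10262.94  # "Average" latency reported below for 0000:00:10.0 NSID 1

# The throughput column is just IOPS times I/O size, converted to MiB/s.
throughput_mib_s = iops_per_ns * IO_SIZE_BYTES / (1024 * 1024)
print(round(throughput_mib_s, 2))   # ~146.24, matching the MiB/s column

# Little's law: with the queue kept full, IOPS ~= queue depth / average latency.
est_iops = QUEUE_DEPTH / (avg_latency_us / 1_000_000)
print(round(est_iops))              # ~12472, close to the reported 12479.01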
00:09:25.345 ========================================================
00:09:25.345                                                                   Latency(us)
00:09:25.345 Device Information                     :       IOPS      MiB/s    Average        min        max
00:09:25.345 PCIE (0000:00:10.0) NSID 1 from core 0:   12479.01     146.24   10262.94    7700.64   29493.08
00:09:25.345 PCIE (0000:00:11.0) NSID 1 from core 0:   12479.01     146.24   10251.61    7463.93   28410.19
00:09:25.345 PCIE (0000:00:13.0) NSID 1 from core 0:   12479.01     146.24   10239.02    6582.42   28395.47
00:09:25.345 PCIE (0000:00:12.0) NSID 1 from core 0:   12479.01     146.24   10224.60    5931.82   27481.54
00:09:25.345 PCIE (0000:00:12.0) NSID 2 from core 0:   12479.01     146.24   10210.88    5358.27   26486.70
00:09:25.345 PCIE (0000:00:12.0) NSID 3 from core 0:   12479.01     146.24   10197.82    4889.13   25641.35
00:09:25.345 ========================================================
00:09:25.345 Total                                  :   74874.08     877.43   10231.15    4889.13   29493.08
00:09:25.345 
00:09:25.346 Summary latency data from core 0 (percentile : latency in us, per namespace):
00:09:25.346 =================================================================================
00:09:25.346 Percentile      10.0 NSID1   11.0 NSID1   13.0 NSID1   12.0 NSID1   12.0 NSID2   12.0 NSID3
00:09:25.346   1.00000% :     8400.524     8519.680     8460.102     8460.102     8519.680     8519.680
00:09:25.346  10.00000% :     8996.305     8996.305     9055.884     9055.884     9055.884     8996.305
00:09:25.346  25.00000% :     9472.931     9472.931     9532.509     9532.509     9472.931     9472.931
00:09:25.346  50.00000% :    10009.135    10009.135    10009.135    10009.135    10009.135    10009.135
00:09:25.346  75.00000% :    10664.495    10545.338    10545.338    10604.916    10545.338    10545.338
00:09:25.346  90.00000% :    11617.745    11498.589    11498.589    11439.011    11498.589    11498.589
00:09:25.346  95.00000% :    12392.262    12273.105    12332.684    12392.262    12273.105    12273.105
00:09:25.346  98.00000% :    13405.091    13524.247    13405.091    13166.778    13166.778    13226.356
00:09:25.346  99.00000% :    20018.269    20494.895    20733.207    19303.331    18826.705    17992.611
00:09:25.346  99.50000% :    27882.589    27048.495    26452.713    25976.087    24307.898    24427.055
00:09:25.346  99.90000% :    29193.309    28240.058    28120.902    27167.651    26214.400    25499.462
00:09:25.346  99.99000% :    29550.778    28478.371    28478.371    27525.120    26571.869    25618.618
00:09:25.346  99.99900% :    29550.778    28478.371    28478.371    27525.120    26571.869    25737.775
00:09:25.346  99.99990% :    29550.778    28478.371    28478.371    27525.120    26571.869    25737.775
00:09:25.346  99.99999% :    29550.778    28478.371    28478.371    27525.120    26571.869    25737.775
00:09:25.346 
00:09:25.346 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:09:25.346 ==============================================================================
00:09:25.346        Range in us     Cumulative    IO count
00:09:25.346 [cumulative latency buckets from 7685.585 us through 29550.778 us, reaching 100.0000%]
00:09:25.346 
00:09:25.346 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:09:25.346 ==============================================================================
00:09:25.346        Range in us     Cumulative    IO count
00:09:25.346 [cumulative latency buckets from 7447.273 us through 28478.371 us, reaching 100.0000%]
00:09:25.347 
00:09:25.347 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:09:25.347 ==============================================================================
00:09:25.347        Range in us     Cumulative    IO count
00:09:25.347 [cumulative latency buckets from 6553.600 us onward]
( 2) 00:09:25.347 7030.225 - 7060.015: 0.3846% ( 1) 00:09:25.347 7060.015 - 7089.804: 0.4006% ( 2) 00:09:25.347 7089.804 - 7119.593: 0.4167% ( 2) 00:09:25.347 7119.593 - 7149.382: 0.4327% ( 2) 00:09:25.347 7149.382 - 7179.171: 0.4487% ( 2) 00:09:25.347 7179.171 - 7208.960: 0.4567% ( 1) 00:09:25.347 7208.960 - 7238.749: 0.4728% ( 2) 00:09:25.347 7238.749 - 7268.538: 0.4888% ( 2) 00:09:25.347 7268.538 - 7298.327: 0.5048% ( 2) 00:09:25.347 7298.327 - 7328.116: 0.5128% ( 1) 00:09:25.347 8162.211 - 8221.789: 0.5288% ( 2) 00:09:25.347 8221.789 - 8281.367: 0.5929% ( 8) 00:09:25.347 8281.367 - 8340.945: 0.6571% ( 8) 00:09:25.347 8340.945 - 8400.524: 0.7933% ( 17) 00:09:25.347 8400.524 - 8460.102: 1.0256% ( 29) 00:09:25.347 8460.102 - 8519.680: 1.4503% ( 53) 00:09:25.347 8519.680 - 8579.258: 2.1394% ( 86) 00:09:25.347 8579.258 - 8638.836: 3.0769% ( 117) 00:09:25.347 8638.836 - 8698.415: 4.1667% ( 136) 00:09:25.347 8698.415 - 8757.993: 5.1763% ( 126) 00:09:25.347 8757.993 - 8817.571: 6.2340% ( 132) 00:09:25.347 8817.571 - 8877.149: 7.2756% ( 130) 00:09:25.347 8877.149 - 8936.727: 8.4615% ( 148) 00:09:25.347 8936.727 - 8996.305: 9.7276% ( 158) 00:09:25.347 8996.305 - 9055.884: 11.2179% ( 186) 00:09:25.347 9055.884 - 9115.462: 12.9327% ( 214) 00:09:25.347 9115.462 - 9175.040: 14.7196% ( 223) 00:09:25.347 9175.040 - 9234.618: 16.5064% ( 223) 00:09:25.347 9234.618 - 9294.196: 18.3333% ( 228) 00:09:25.347 9294.196 - 9353.775: 20.1843% ( 231) 00:09:25.347 9353.775 - 9413.353: 22.5881% ( 300) 00:09:25.347 9413.353 - 9472.931: 24.8638% ( 284) 00:09:25.347 9472.931 - 9532.509: 27.0593% ( 274) 00:09:25.347 9532.509 - 9592.087: 29.5833% ( 315) 00:09:25.347 9592.087 - 9651.665: 32.1074% ( 315) 00:09:25.347 9651.665 - 9711.244: 35.0561% ( 368) 00:09:25.347 9711.244 - 9770.822: 37.9247% ( 358) 00:09:25.347 9770.822 - 9830.400: 41.0897% ( 395) 00:09:25.347 9830.400 - 9889.978: 44.3189% ( 403) 00:09:25.347 9889.978 - 9949.556: 47.7163% ( 424) 00:09:25.347 9949.556 - 10009.135: 51.1939% ( 434) 00:09:25.347 10009.135 - 10068.713: 54.4631% ( 408) 00:09:25.347 10068.713 - 10128.291: 57.6603% ( 399) 00:09:25.347 10128.291 - 10187.869: 60.4888% ( 353) 00:09:25.347 10187.869 - 10247.447: 63.0689% ( 322) 00:09:25.347 10247.447 - 10307.025: 65.6731% ( 325) 00:09:25.347 10307.025 - 10366.604: 68.2212% ( 318) 00:09:25.347 10366.604 - 10426.182: 70.6170% ( 299) 00:09:25.347 10426.182 - 10485.760: 72.8045% ( 273) 00:09:25.347 10485.760 - 10545.338: 75.0481% ( 280) 00:09:25.347 10545.338 - 10604.916: 77.0994% ( 256) 00:09:25.347 10604.916 - 10664.495: 79.2708% ( 271) 00:09:25.347 10664.495 - 10724.073: 81.0417% ( 221) 00:09:25.347 10724.073 - 10783.651: 82.5801% ( 192) 00:09:25.348 10783.651 - 10843.229: 83.9183% ( 167) 00:09:25.348 10843.229 - 10902.807: 85.0721% ( 144) 00:09:25.348 10902.807 - 10962.385: 85.9215% ( 106) 00:09:25.348 10962.385 - 11021.964: 86.5865% ( 83) 00:09:25.348 11021.964 - 11081.542: 87.1154% ( 66) 00:09:25.348 11081.542 - 11141.120: 87.6763% ( 70) 00:09:25.348 11141.120 - 11200.698: 88.2131% ( 67) 00:09:25.348 11200.698 - 11260.276: 88.7019% ( 61) 00:09:25.348 11260.276 - 11319.855: 89.0946% ( 49) 00:09:25.348 11319.855 - 11379.433: 89.4872% ( 49) 00:09:25.348 11379.433 - 11439.011: 89.8077% ( 40) 00:09:25.348 11439.011 - 11498.589: 90.1843% ( 47) 00:09:25.348 11498.589 - 11558.167: 90.5128% ( 41) 00:09:25.348 11558.167 - 11617.745: 90.8894% ( 47) 00:09:25.348 11617.745 - 11677.324: 91.3061% ( 52) 00:09:25.348 11677.324 - 11736.902: 91.7468% ( 55) 00:09:25.348 11736.902 - 11796.480: 92.2035% ( 57) 
00:09:25.348 11796.480 - 11856.058: 92.5481% ( 43) 00:09:25.348 11856.058 - 11915.636: 92.9167% ( 46) 00:09:25.348 11915.636 - 11975.215: 93.2452% ( 41) 00:09:25.348 11975.215 - 12034.793: 93.5577% ( 39) 00:09:25.348 12034.793 - 12094.371: 93.9022% ( 43) 00:09:25.348 12094.371 - 12153.949: 94.2228% ( 40) 00:09:25.348 12153.949 - 12213.527: 94.4872% ( 33) 00:09:25.348 12213.527 - 12273.105: 94.7676% ( 35) 00:09:25.348 12273.105 - 12332.684: 95.0080% ( 30) 00:09:25.348 12332.684 - 12392.262: 95.2724% ( 33) 00:09:25.348 12392.262 - 12451.840: 95.5609% ( 36) 00:09:25.348 12451.840 - 12511.418: 95.8654% ( 38) 00:09:25.348 12511.418 - 12570.996: 96.1458% ( 35) 00:09:25.348 12570.996 - 12630.575: 96.4022% ( 32) 00:09:25.348 12630.575 - 12690.153: 96.5705% ( 21) 00:09:25.348 12690.153 - 12749.731: 96.7147% ( 18) 00:09:25.348 12749.731 - 12809.309: 96.8670% ( 19) 00:09:25.348 12809.309 - 12868.887: 97.0513% ( 23) 00:09:25.348 12868.887 - 12928.465: 97.2276% ( 22) 00:09:25.348 12928.465 - 12988.044: 97.3958% ( 21) 00:09:25.348 12988.044 - 13047.622: 97.5561% ( 20) 00:09:25.348 13047.622 - 13107.200: 97.6763% ( 15) 00:09:25.348 13107.200 - 13166.778: 97.7724% ( 12) 00:09:25.348 13166.778 - 13226.356: 97.8365% ( 8) 00:09:25.348 13226.356 - 13285.935: 97.9087% ( 9) 00:09:25.348 13285.935 - 13345.513: 97.9728% ( 8) 00:09:25.348 13345.513 - 13405.091: 98.0449% ( 9) 00:09:25.348 13405.091 - 13464.669: 98.1330% ( 11) 00:09:25.348 13464.669 - 13524.247: 98.1891% ( 7) 00:09:25.348 13524.247 - 13583.825: 98.2692% ( 10) 00:09:25.348 13583.825 - 13643.404: 98.3494% ( 10) 00:09:25.348 13643.404 - 13702.982: 98.4375% ( 11) 00:09:25.348 13702.982 - 13762.560: 98.5256% ( 11) 00:09:25.348 13762.560 - 13822.138: 98.5978% ( 9) 00:09:25.348 13822.138 - 13881.716: 98.6619% ( 8) 00:09:25.348 13881.716 - 13941.295: 98.7099% ( 6) 00:09:25.348 13941.295 - 14000.873: 98.7179% ( 1) 00:09:25.348 14000.873 - 14060.451: 98.7420% ( 3) 00:09:25.348 14060.451 - 14120.029: 98.7580% ( 2) 00:09:25.348 14120.029 - 14179.607: 98.7740% ( 2) 00:09:25.348 14179.607 - 14239.185: 98.7901% ( 2) 00:09:25.348 14239.185 - 14298.764: 98.8141% ( 3) 00:09:25.348 14298.764 - 14358.342: 98.8381% ( 3) 00:09:25.348 14358.342 - 14417.920: 98.8542% ( 2) 00:09:25.348 14417.920 - 14477.498: 98.8702% ( 2) 00:09:25.348 14477.498 - 14537.076: 98.8862% ( 2) 00:09:25.348 14537.076 - 14596.655: 98.9103% ( 3) 00:09:25.348 14596.655 - 14656.233: 98.9263% ( 2) 00:09:25.348 14656.233 - 14715.811: 98.9503% ( 3) 00:09:25.348 14715.811 - 14775.389: 98.9663% ( 2) 00:09:25.348 14775.389 - 14834.967: 98.9744% ( 1) 00:09:25.348 20494.895 - 20614.051: 98.9904% ( 2) 00:09:25.348 20614.051 - 20733.207: 99.0785% ( 11) 00:09:25.348 20733.207 - 20852.364: 99.1346% ( 7) 00:09:25.348 20852.364 - 20971.520: 99.2067% ( 9) 00:09:25.348 20971.520 - 21090.676: 99.2708% ( 8) 00:09:25.348 21090.676 - 21209.833: 99.3029% ( 4) 00:09:25.348 21209.833 - 21328.989: 99.3269% ( 3) 00:09:25.348 21328.989 - 21448.145: 99.3670% ( 5) 00:09:25.348 21448.145 - 21567.302: 99.4071% ( 5) 00:09:25.348 21567.302 - 21686.458: 99.4311% ( 3) 00:09:25.348 21686.458 - 21805.615: 99.4631% ( 4) 00:09:25.348 21805.615 - 21924.771: 99.4872% ( 3) 00:09:25.348 26214.400 - 26333.556: 99.4952% ( 1) 00:09:25.348 26333.556 - 26452.713: 99.5353% ( 5) 00:09:25.348 26452.713 - 26571.869: 99.5753% ( 5) 00:09:25.348 26571.869 - 26691.025: 99.6394% ( 8) 00:09:25.348 27048.495 - 27167.651: 99.6474% ( 1) 00:09:25.348 27167.651 - 27286.807: 99.6875% ( 5) 00:09:25.348 27286.807 - 27405.964: 99.7196% ( 4) 00:09:25.348 27405.964 - 
27525.120: 99.7436% ( 3) 00:09:25.348 27525.120 - 27644.276: 99.7756% ( 4) 00:09:25.348 27644.276 - 27763.433: 99.8077% ( 4) 00:09:25.348 27763.433 - 27882.589: 99.8397% ( 4) 00:09:25.348 27882.589 - 28001.745: 99.8798% ( 5) 00:09:25.348 28001.745 - 28120.902: 99.9119% ( 4) 00:09:25.348 28120.902 - 28240.058: 99.9599% ( 6) 00:09:25.348 28240.058 - 28359.215: 99.9840% ( 3) 00:09:25.348 28359.215 - 28478.371: 100.0000% ( 2) 00:09:25.348 00:09:25.348 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:09:25.348 ============================================================================== 00:09:25.348 Range in us Cumulative IO count 00:09:25.348 5928.029 - 5957.818: 0.0561% ( 7) 00:09:25.348 5957.818 - 5987.607: 0.0962% ( 5) 00:09:25.348 5987.607 - 6017.396: 0.2163% ( 15) 00:09:25.348 6017.396 - 6047.185: 0.2644% ( 6) 00:09:25.348 6047.185 - 6076.975: 0.2724% ( 1) 00:09:25.348 6076.975 - 6106.764: 0.2885% ( 2) 00:09:25.348 6106.764 - 6136.553: 0.2965% ( 1) 00:09:25.348 6136.553 - 6166.342: 0.3125% ( 2) 00:09:25.348 6166.342 - 6196.131: 0.3205% ( 1) 00:09:25.348 6196.131 - 6225.920: 0.3365% ( 2) 00:09:25.348 6225.920 - 6255.709: 0.3446% ( 1) 00:09:25.348 6255.709 - 6285.498: 0.3606% ( 2) 00:09:25.348 6285.498 - 6315.287: 0.3766% ( 2) 00:09:25.348 6315.287 - 6345.076: 0.3846% ( 1) 00:09:25.348 6345.076 - 6374.865: 0.4006% ( 2) 00:09:25.348 6374.865 - 6404.655: 0.4087% ( 1) 00:09:25.348 6404.655 - 6434.444: 0.4167% ( 1) 00:09:25.348 6434.444 - 6464.233: 0.4327% ( 2) 00:09:25.348 6464.233 - 6494.022: 0.4487% ( 2) 00:09:25.348 6494.022 - 6523.811: 0.4647% ( 2) 00:09:25.348 6523.811 - 6553.600: 0.4728% ( 1) 00:09:25.348 6553.600 - 6583.389: 0.4888% ( 2) 00:09:25.348 6583.389 - 6613.178: 0.5048% ( 2) 00:09:25.348 6613.178 - 6642.967: 0.5128% ( 1) 00:09:25.348 8162.211 - 8221.789: 0.5208% ( 1) 00:09:25.348 8221.789 - 8281.367: 0.5529% ( 4) 00:09:25.348 8281.367 - 8340.945: 0.6410% ( 11) 00:09:25.348 8340.945 - 8400.524: 0.8013% ( 20) 00:09:25.348 8400.524 - 8460.102: 1.0737% ( 34) 00:09:25.348 8460.102 - 8519.680: 1.3782% ( 38) 00:09:25.348 8519.680 - 8579.258: 2.0433% ( 83) 00:09:25.348 8579.258 - 8638.836: 2.8205% ( 97) 00:09:25.348 8638.836 - 8698.415: 3.7580% ( 117) 00:09:25.348 8698.415 - 8757.993: 4.7676% ( 126) 00:09:25.348 8757.993 - 8817.571: 6.0016% ( 154) 00:09:25.348 8817.571 - 8877.149: 7.2756% ( 159) 00:09:25.348 8877.149 - 8936.727: 8.5256% ( 156) 00:09:25.348 8936.727 - 8996.305: 9.9519% ( 178) 00:09:25.348 8996.305 - 9055.884: 11.4022% ( 181) 00:09:25.348 9055.884 - 9115.462: 12.8686% ( 183) 00:09:25.348 9115.462 - 9175.040: 14.5112% ( 205) 00:09:25.348 9175.040 - 9234.618: 16.4423% ( 241) 00:09:25.348 9234.618 - 9294.196: 18.4135% ( 246) 00:09:25.348 9294.196 - 9353.775: 20.4006% ( 248) 00:09:25.348 9353.775 - 9413.353: 22.5481% ( 268) 00:09:25.348 9413.353 - 9472.931: 24.8638% ( 289) 00:09:25.348 9472.931 - 9532.509: 27.2115% ( 293) 00:09:25.348 9532.509 - 9592.087: 29.7035% ( 311) 00:09:25.348 9592.087 - 9651.665: 32.1635% ( 307) 00:09:25.348 9651.665 - 9711.244: 34.8397% ( 334) 00:09:25.348 9711.244 - 9770.822: 37.8446% ( 375) 00:09:25.348 9770.822 - 9830.400: 40.8814% ( 379) 00:09:25.348 9830.400 - 9889.978: 44.1747% ( 411) 00:09:25.348 9889.978 - 9949.556: 47.5401% ( 420) 00:09:25.348 9949.556 - 10009.135: 50.9215% ( 422) 00:09:25.348 10009.135 - 10068.713: 54.0304% ( 388) 00:09:25.348 10068.713 - 10128.291: 57.1715% ( 392) 00:09:25.348 10128.291 - 10187.869: 59.9840% ( 351) 00:09:25.348 10187.869 - 10247.447: 62.6923% ( 338) 00:09:25.348 10247.447 - 10307.025: 
65.3766% ( 335) 00:09:25.348 10307.025 - 10366.604: 68.1571% ( 347) 00:09:25.348 10366.604 - 10426.182: 70.4968% ( 292) 00:09:25.348 10426.182 - 10485.760: 72.7083% ( 276) 00:09:25.348 10485.760 - 10545.338: 74.8878% ( 272) 00:09:25.348 10545.338 - 10604.916: 77.0513% ( 270) 00:09:25.348 10604.916 - 10664.495: 78.9503% ( 237) 00:09:25.348 10664.495 - 10724.073: 80.5288% ( 197) 00:09:25.348 10724.073 - 10783.651: 82.0994% ( 196) 00:09:25.348 10783.651 - 10843.229: 83.5337% ( 179) 00:09:25.348 10843.229 - 10902.807: 84.7356% ( 150) 00:09:25.348 10902.807 - 10962.385: 85.7772% ( 130) 00:09:25.348 10962.385 - 11021.964: 86.5465% ( 96) 00:09:25.348 11021.964 - 11081.542: 87.2837% ( 92) 00:09:25.348 11081.542 - 11141.120: 87.9327% ( 81) 00:09:25.348 11141.120 - 11200.698: 88.4615% ( 66) 00:09:25.348 11200.698 - 11260.276: 88.9904% ( 66) 00:09:25.348 11260.276 - 11319.855: 89.4071% ( 52) 00:09:25.348 11319.855 - 11379.433: 89.7356% ( 41) 00:09:25.348 11379.433 - 11439.011: 90.0401% ( 38) 00:09:25.348 11439.011 - 11498.589: 90.3766% ( 42) 00:09:25.348 11498.589 - 11558.167: 90.7051% ( 41) 00:09:25.348 11558.167 - 11617.745: 91.0978% ( 49) 00:09:25.348 11617.745 - 11677.324: 91.4984% ( 50) 00:09:25.348 11677.324 - 11736.902: 91.8910% ( 49) 00:09:25.348 11736.902 - 11796.480: 92.3157% ( 53) 00:09:25.348 11796.480 - 11856.058: 92.7244% ( 51) 00:09:25.348 11856.058 - 11915.636: 93.0529% ( 41) 00:09:25.348 11915.636 - 11975.215: 93.3333% ( 35) 00:09:25.348 11975.215 - 12034.793: 93.5737% ( 30) 00:09:25.348 12034.793 - 12094.371: 93.7981% ( 28) 00:09:25.348 12094.371 - 12153.949: 94.0064% ( 26) 00:09:25.348 12153.949 - 12213.527: 94.2548% ( 31) 00:09:25.348 12213.527 - 12273.105: 94.5673% ( 39) 00:09:25.348 12273.105 - 12332.684: 94.9038% ( 42) 00:09:25.348 12332.684 - 12392.262: 95.2644% ( 45) 00:09:25.348 12392.262 - 12451.840: 95.6090% ( 43) 00:09:25.348 12451.840 - 12511.418: 96.0096% ( 50) 00:09:25.349 12511.418 - 12570.996: 96.2981% ( 36) 00:09:25.349 12570.996 - 12630.575: 96.5545% ( 32) 00:09:25.349 12630.575 - 12690.153: 96.7788% ( 28) 00:09:25.349 12690.153 - 12749.731: 96.9952% ( 27) 00:09:25.349 12749.731 - 12809.309: 97.1554% ( 20) 00:09:25.349 12809.309 - 12868.887: 97.3958% ( 30) 00:09:25.349 12868.887 - 12928.465: 97.5401% ( 18) 00:09:25.349 12928.465 - 12988.044: 97.6763% ( 17) 00:09:25.349 12988.044 - 13047.622: 97.8125% ( 17) 00:09:25.349 13047.622 - 13107.200: 97.9247% ( 14) 00:09:25.349 13107.200 - 13166.778: 98.0208% ( 12) 00:09:25.349 13166.778 - 13226.356: 98.1170% ( 12) 00:09:25.349 13226.356 - 13285.935: 98.1971% ( 10) 00:09:25.349 13285.935 - 13345.513: 98.2612% ( 8) 00:09:25.349 13345.513 - 13405.091: 98.3413% ( 10) 00:09:25.349 13405.091 - 13464.669: 98.4135% ( 9) 00:09:25.349 13464.669 - 13524.247: 98.4936% ( 10) 00:09:25.349 13524.247 - 13583.825: 98.5577% ( 8) 00:09:25.349 13583.825 - 13643.404: 98.5978% ( 5) 00:09:25.349 13643.404 - 13702.982: 98.6138% ( 2) 00:09:25.349 13702.982 - 13762.560: 98.6458% ( 4) 00:09:25.349 13762.560 - 13822.138: 98.6699% ( 3) 00:09:25.349 13822.138 - 13881.716: 98.6779% ( 1) 00:09:25.349 13881.716 - 13941.295: 98.7019% ( 3) 00:09:25.349 13941.295 - 14000.873: 98.7179% ( 2) 00:09:25.349 14000.873 - 14060.451: 98.7340% ( 2) 00:09:25.349 14060.451 - 14120.029: 98.7420% ( 1) 00:09:25.349 14120.029 - 14179.607: 98.7660% ( 3) 00:09:25.349 14179.607 - 14239.185: 98.7821% ( 2) 00:09:25.349 14239.185 - 14298.764: 98.7981% ( 2) 00:09:25.349 14298.764 - 14358.342: 98.8221% ( 3) 00:09:25.349 14358.342 - 14417.920: 98.8381% ( 2) 00:09:25.349 14417.920 - 
14477.498: 98.8622% ( 3) 00:09:25.349 14477.498 - 14537.076: 98.8782% ( 2) 00:09:25.349 14537.076 - 14596.655: 98.9022% ( 3) 00:09:25.349 14596.655 - 14656.233: 98.9263% ( 3) 00:09:25.349 14656.233 - 14715.811: 98.9503% ( 3) 00:09:25.349 14715.811 - 14775.389: 98.9744% ( 3) 00:09:25.349 19065.018 - 19184.175: 98.9904% ( 2) 00:09:25.349 19184.175 - 19303.331: 99.0304% ( 5) 00:09:25.349 19303.331 - 19422.487: 99.0785% ( 6) 00:09:25.349 19422.487 - 19541.644: 99.1186% ( 5) 00:09:25.349 19541.644 - 19660.800: 99.1667% ( 6) 00:09:25.349 19660.800 - 19779.956: 99.2067% ( 5) 00:09:25.349 19779.956 - 19899.113: 99.2388% ( 4) 00:09:25.349 19899.113 - 20018.269: 99.2708% ( 4) 00:09:25.349 20018.269 - 20137.425: 99.3109% ( 5) 00:09:25.349 20137.425 - 20256.582: 99.3510% ( 5) 00:09:25.349 20256.582 - 20375.738: 99.3990% ( 6) 00:09:25.349 20375.738 - 20494.895: 99.4391% ( 5) 00:09:25.349 20494.895 - 20614.051: 99.4712% ( 4) 00:09:25.349 20614.051 - 20733.207: 99.4872% ( 2) 00:09:25.349 25856.931 - 25976.087: 99.5112% ( 3) 00:09:25.349 25976.087 - 26095.244: 99.5593% ( 6) 00:09:25.349 26095.244 - 26214.400: 99.6074% ( 6) 00:09:25.349 26214.400 - 26333.556: 99.6394% ( 4) 00:09:25.349 26333.556 - 26452.713: 99.6875% ( 6) 00:09:25.349 26452.713 - 26571.869: 99.7196% ( 4) 00:09:25.349 26571.869 - 26691.025: 99.7596% ( 5) 00:09:25.349 26691.025 - 26810.182: 99.7997% ( 5) 00:09:25.349 26810.182 - 26929.338: 99.8317% ( 4) 00:09:25.349 26929.338 - 27048.495: 99.8718% ( 5) 00:09:25.349 27048.495 - 27167.651: 99.9038% ( 4) 00:09:25.349 27167.651 - 27286.807: 99.9279% ( 3) 00:09:25.349 27286.807 - 27405.964: 99.9760% ( 6) 00:09:25.349 27405.964 - 27525.120: 100.0000% ( 3) 00:09:25.349 00:09:25.349 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:09:25.349 ============================================================================== 00:09:25.349 Range in us Cumulative IO count 00:09:25.349 5332.247 - 5362.036: 0.0080% ( 1) 00:09:25.349 5362.036 - 5391.825: 0.1362% ( 16) 00:09:25.349 5391.825 - 5421.615: 0.1683% ( 4) 00:09:25.349 5421.615 - 5451.404: 0.2324% ( 8) 00:09:25.349 5451.404 - 5481.193: 0.2404% ( 1) 00:09:25.349 5481.193 - 5510.982: 0.2564% ( 2) 00:09:25.349 5510.982 - 5540.771: 0.2644% ( 1) 00:09:25.349 5540.771 - 5570.560: 0.2804% ( 2) 00:09:25.349 5570.560 - 5600.349: 0.2885% ( 1) 00:09:25.349 5600.349 - 5630.138: 0.3045% ( 2) 00:09:25.349 5630.138 - 5659.927: 0.3205% ( 2) 00:09:25.349 5659.927 - 5689.716: 0.3285% ( 1) 00:09:25.349 5689.716 - 5719.505: 0.3446% ( 2) 00:09:25.349 5719.505 - 5749.295: 0.3526% ( 1) 00:09:25.349 5749.295 - 5779.084: 0.3686% ( 2) 00:09:25.349 5779.084 - 5808.873: 0.3846% ( 2) 00:09:25.349 5808.873 - 5838.662: 0.3926% ( 1) 00:09:25.349 5838.662 - 5868.451: 0.4087% ( 2) 00:09:25.349 5898.240 - 5928.029: 0.4247% ( 2) 00:09:25.349 5928.029 - 5957.818: 0.4407% ( 2) 00:09:25.349 5957.818 - 5987.607: 0.4487% ( 1) 00:09:25.349 5987.607 - 6017.396: 0.4647% ( 2) 00:09:25.349 6017.396 - 6047.185: 0.4808% ( 2) 00:09:25.349 6047.185 - 6076.975: 0.4968% ( 2) 00:09:25.349 6076.975 - 6106.764: 0.5048% ( 1) 00:09:25.349 6106.764 - 6136.553: 0.5128% ( 1) 00:09:25.349 8221.789 - 8281.367: 0.5208% ( 1) 00:09:25.349 8281.367 - 8340.945: 0.5769% ( 7) 00:09:25.349 8340.945 - 8400.524: 0.6811% ( 13) 00:09:25.349 8400.524 - 8460.102: 0.9215% ( 30) 00:09:25.349 8460.102 - 8519.680: 1.3862% ( 58) 00:09:25.349 8519.680 - 8579.258: 2.0272% ( 80) 00:09:25.349 8579.258 - 8638.836: 2.8045% ( 97) 00:09:25.349 8638.836 - 8698.415: 3.7660% ( 120) 00:09:25.349 8698.415 - 8757.993: 4.8638% ( 
137) 00:09:25.349 8757.993 - 8817.571: 6.0737% ( 151) 00:09:25.349 8817.571 - 8877.149: 7.3237% ( 156) 00:09:25.349 8877.149 - 8936.727: 8.6218% ( 162) 00:09:25.349 8936.727 - 8996.305: 9.9760% ( 169) 00:09:25.349 8996.305 - 9055.884: 11.4744% ( 187) 00:09:25.349 9055.884 - 9115.462: 13.1170% ( 205) 00:09:25.349 9115.462 - 9175.040: 15.0000% ( 235) 00:09:25.349 9175.040 - 9234.618: 16.9551% ( 244) 00:09:25.349 9234.618 - 9294.196: 18.8942% ( 242) 00:09:25.349 9294.196 - 9353.775: 20.9936% ( 262) 00:09:25.349 9353.775 - 9413.353: 23.1410% ( 268) 00:09:25.349 9413.353 - 9472.931: 25.4888% ( 293) 00:09:25.349 9472.931 - 9532.509: 27.7404% ( 281) 00:09:25.349 9532.509 - 9592.087: 29.8798% ( 267) 00:09:25.349 9592.087 - 9651.665: 32.2997% ( 302) 00:09:25.349 9651.665 - 9711.244: 34.9279% ( 328) 00:09:25.349 9711.244 - 9770.822: 37.7324% ( 350) 00:09:25.349 9770.822 - 9830.400: 40.8894% ( 394) 00:09:25.349 9830.400 - 9889.978: 44.1426% ( 406) 00:09:25.349 9889.978 - 9949.556: 47.7404% ( 449) 00:09:25.349 9949.556 - 10009.135: 51.0817% ( 417) 00:09:25.349 10009.135 - 10068.713: 54.1907% ( 388) 00:09:25.349 10068.713 - 10128.291: 57.3077% ( 389) 00:09:25.349 10128.291 - 10187.869: 60.1923% ( 360) 00:09:25.349 10187.869 - 10247.447: 63.0449% ( 356) 00:09:25.349 10247.447 - 10307.025: 65.6170% ( 321) 00:09:25.349 10307.025 - 10366.604: 68.2212% ( 325) 00:09:25.349 10366.604 - 10426.182: 70.6651% ( 305) 00:09:25.349 10426.182 - 10485.760: 73.0609% ( 299) 00:09:25.349 10485.760 - 10545.338: 75.2724% ( 276) 00:09:25.349 10545.338 - 10604.916: 77.2516% ( 247) 00:09:25.349 10604.916 - 10664.495: 79.0625% ( 226) 00:09:25.349 10664.495 - 10724.073: 80.6410% ( 197) 00:09:25.349 10724.073 - 10783.651: 82.1394% ( 187) 00:09:25.349 10783.651 - 10843.229: 83.3173% ( 147) 00:09:25.349 10843.229 - 10902.807: 84.2708% ( 119) 00:09:25.349 10902.807 - 10962.385: 85.1202% ( 106) 00:09:25.349 10962.385 - 11021.964: 85.8253% ( 88) 00:09:25.349 11021.964 - 11081.542: 86.5224% ( 87) 00:09:25.349 11081.542 - 11141.120: 87.2035% ( 85) 00:09:25.349 11141.120 - 11200.698: 87.8846% ( 85) 00:09:25.349 11200.698 - 11260.276: 88.5176% ( 79) 00:09:25.349 11260.276 - 11319.855: 89.0865% ( 71) 00:09:25.349 11319.855 - 11379.433: 89.4792% ( 49) 00:09:25.349 11379.433 - 11439.011: 89.8317% ( 44) 00:09:25.349 11439.011 - 11498.589: 90.2003% ( 46) 00:09:25.349 11498.589 - 11558.167: 90.5529% ( 44) 00:09:25.349 11558.167 - 11617.745: 90.9776% ( 53) 00:09:25.349 11617.745 - 11677.324: 91.4343% ( 57) 00:09:25.349 11677.324 - 11736.902: 91.8510% ( 52) 00:09:25.349 11736.902 - 11796.480: 92.2917% ( 55) 00:09:25.349 11796.480 - 11856.058: 92.6442% ( 44) 00:09:25.349 11856.058 - 11915.636: 93.1010% ( 57) 00:09:25.349 11915.636 - 11975.215: 93.4615% ( 45) 00:09:25.349 11975.215 - 12034.793: 93.7580% ( 37) 00:09:25.349 12034.793 - 12094.371: 94.0304% ( 34) 00:09:25.349 12094.371 - 12153.949: 94.3029% ( 34) 00:09:25.349 12153.949 - 12213.527: 94.6474% ( 43) 00:09:25.349 12213.527 - 12273.105: 95.0080% ( 45) 00:09:25.349 12273.105 - 12332.684: 95.3446% ( 42) 00:09:25.349 12332.684 - 12392.262: 95.6490% ( 38) 00:09:25.349 12392.262 - 12451.840: 95.9615% ( 39) 00:09:25.349 12451.840 - 12511.418: 96.2981% ( 42) 00:09:25.349 12511.418 - 12570.996: 96.5385% ( 30) 00:09:25.349 12570.996 - 12630.575: 96.7228% ( 23) 00:09:25.349 12630.575 - 12690.153: 96.8990% ( 22) 00:09:25.349 12690.153 - 12749.731: 97.0272% ( 16) 00:09:25.349 12749.731 - 12809.309: 97.1715% ( 18) 00:09:25.349 12809.309 - 12868.887: 97.3077% ( 17) 00:09:25.349 12868.887 - 12928.465: 
97.4359% ( 16) 00:09:25.349 12928.465 - 12988.044: 97.6122% ( 22) 00:09:25.349 12988.044 - 13047.622: 97.7885% ( 22) 00:09:25.349 13047.622 - 13107.200: 97.9647% ( 22) 00:09:25.349 13107.200 - 13166.778: 98.0849% ( 15) 00:09:25.349 13166.778 - 13226.356: 98.1571% ( 9) 00:09:25.349 13226.356 - 13285.935: 98.2372% ( 10) 00:09:25.349 13285.935 - 13345.513: 98.3253% ( 11) 00:09:25.349 13345.513 - 13405.091: 98.3734% ( 6) 00:09:25.349 13405.091 - 13464.669: 98.4215% ( 6) 00:09:25.349 13464.669 - 13524.247: 98.4856% ( 8) 00:09:25.349 13524.247 - 13583.825: 98.5256% ( 5) 00:09:25.349 13583.825 - 13643.404: 98.5497% ( 3) 00:09:25.349 13643.404 - 13702.982: 98.5817% ( 4) 00:09:25.349 13702.982 - 13762.560: 98.6138% ( 4) 00:09:25.349 13762.560 - 13822.138: 98.6458% ( 4) 00:09:25.349 13822.138 - 13881.716: 98.6699% ( 3) 00:09:25.349 13881.716 - 13941.295: 98.7099% ( 5) 00:09:25.349 13941.295 - 14000.873: 98.7420% ( 4) 00:09:25.349 14000.873 - 14060.451: 98.7500% ( 1) 00:09:25.349 14060.451 - 14120.029: 98.7660% ( 2) 00:09:25.350 14120.029 - 14179.607: 98.7821% ( 2) 00:09:25.350 14179.607 - 14239.185: 98.7981% ( 2) 00:09:25.350 14239.185 - 14298.764: 98.8141% ( 2) 00:09:25.350 14298.764 - 14358.342: 98.8301% ( 2) 00:09:25.350 14358.342 - 14417.920: 98.8462% ( 2) 00:09:25.350 14417.920 - 14477.498: 98.8622% ( 2) 00:09:25.350 14477.498 - 14537.076: 98.8782% ( 2) 00:09:25.350 14537.076 - 14596.655: 98.9022% ( 3) 00:09:25.350 14596.655 - 14656.233: 98.9183% ( 2) 00:09:25.350 14656.233 - 14715.811: 98.9423% ( 3) 00:09:25.350 14715.811 - 14775.389: 98.9663% ( 3) 00:09:25.350 14775.389 - 14834.967: 98.9744% ( 1) 00:09:25.350 18707.549 - 18826.705: 99.0064% ( 4) 00:09:25.350 18826.705 - 18945.862: 99.0785% ( 9) 00:09:25.350 18945.862 - 19065.018: 99.1747% ( 12) 00:09:25.350 19065.018 - 19184.175: 99.2548% ( 10) 00:09:25.350 19184.175 - 19303.331: 99.2949% ( 5) 00:09:25.350 19303.331 - 19422.487: 99.3349% ( 5) 00:09:25.350 19422.487 - 19541.644: 99.3590% ( 3) 00:09:25.350 19541.644 - 19660.800: 99.3910% ( 4) 00:09:25.350 19660.800 - 19779.956: 99.4311% ( 5) 00:09:25.350 19779.956 - 19899.113: 99.4631% ( 4) 00:09:25.350 19899.113 - 20018.269: 99.4872% ( 3) 00:09:25.350 24188.742 - 24307.898: 99.5032% ( 2) 00:09:25.350 24307.898 - 24427.055: 99.5112% ( 1) 00:09:25.350 25022.836 - 25141.993: 99.5272% ( 2) 00:09:25.350 25141.993 - 25261.149: 99.5673% ( 5) 00:09:25.350 25261.149 - 25380.305: 99.6154% ( 6) 00:09:25.350 25380.305 - 25499.462: 99.6554% ( 5) 00:09:25.350 25499.462 - 25618.618: 99.6875% ( 4) 00:09:25.350 25618.618 - 25737.775: 99.7356% ( 6) 00:09:25.350 25737.775 - 25856.931: 99.7756% ( 5) 00:09:25.350 25856.931 - 25976.087: 99.8157% ( 5) 00:09:25.350 25976.087 - 26095.244: 99.8558% ( 5) 00:09:25.350 26095.244 - 26214.400: 99.9038% ( 6) 00:09:25.350 26214.400 - 26333.556: 99.9439% ( 5) 00:09:25.350 26333.556 - 26452.713: 99.9840% ( 5) 00:09:25.350 26452.713 - 26571.869: 100.0000% ( 2) 00:09:25.350 00:09:25.350 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:09:25.350 ============================================================================== 00:09:25.350 Range in us Cumulative IO count 00:09:25.350 4885.411 - 4915.200: 0.0321% ( 4) 00:09:25.350 4915.200 - 4944.989: 0.0801% ( 6) 00:09:25.350 4944.989 - 4974.778: 0.1362% ( 7) 00:09:25.350 4974.778 - 5004.567: 0.1923% ( 7) 00:09:25.350 5004.567 - 5034.356: 0.2404% ( 6) 00:09:25.350 5034.356 - 5064.145: 0.2484% ( 1) 00:09:25.350 5064.145 - 5093.935: 0.2644% ( 2) 00:09:25.350 5093.935 - 5123.724: 0.2804% ( 2) 00:09:25.350 5123.724 - 
5153.513: 0.2965% ( 2) 00:09:25.350 5153.513 - 5183.302: 0.3205% ( 3) 00:09:25.350 5183.302 - 5213.091: 0.3285% ( 1) 00:09:25.350 5213.091 - 5242.880: 0.3446% ( 2) 00:09:25.350 5242.880 - 5272.669: 0.3606% ( 2) 00:09:25.350 5272.669 - 5302.458: 0.3766% ( 2) 00:09:25.350 5302.458 - 5332.247: 0.3846% ( 1) 00:09:25.350 5332.247 - 5362.036: 0.4006% ( 2) 00:09:25.350 5362.036 - 5391.825: 0.4167% ( 2) 00:09:25.350 5391.825 - 5421.615: 0.4327% ( 2) 00:09:25.350 5421.615 - 5451.404: 0.4407% ( 1) 00:09:25.350 5451.404 - 5481.193: 0.4487% ( 1) 00:09:25.350 5481.193 - 5510.982: 0.4567% ( 1) 00:09:25.350 5510.982 - 5540.771: 0.4728% ( 2) 00:09:25.350 5540.771 - 5570.560: 0.4968% ( 3) 00:09:25.350 5570.560 - 5600.349: 0.5048% ( 1) 00:09:25.350 5600.349 - 5630.138: 0.5128% ( 1) 00:09:25.350 8221.789 - 8281.367: 0.5208% ( 1) 00:09:25.350 8281.367 - 8340.945: 0.5449% ( 3) 00:09:25.350 8340.945 - 8400.524: 0.6651% ( 15) 00:09:25.350 8400.524 - 8460.102: 0.9615% ( 37) 00:09:25.350 8460.102 - 8519.680: 1.5545% ( 74) 00:09:25.350 8519.680 - 8579.258: 2.3718% ( 102) 00:09:25.350 8579.258 - 8638.836: 3.1971% ( 103) 00:09:25.350 8638.836 - 8698.415: 4.1346% ( 117) 00:09:25.350 8698.415 - 8757.993: 5.1923% ( 132) 00:09:25.350 8757.993 - 8817.571: 6.3622% ( 146) 00:09:25.350 8817.571 - 8877.149: 7.5401% ( 147) 00:09:25.350 8877.149 - 8936.727: 8.9744% ( 179) 00:09:25.350 8936.727 - 8996.305: 10.3285% ( 169) 00:09:25.350 8996.305 - 9055.884: 11.8189% ( 186) 00:09:25.350 9055.884 - 9115.462: 13.6699% ( 231) 00:09:25.350 9115.462 - 9175.040: 15.4567% ( 223) 00:09:25.350 9175.040 - 9234.618: 17.3558% ( 237) 00:09:25.350 9234.618 - 9294.196: 19.1426% ( 223) 00:09:25.350 9294.196 - 9353.775: 21.1458% ( 250) 00:09:25.350 9353.775 - 9413.353: 23.1571% ( 251) 00:09:25.350 9413.353 - 9472.931: 25.4327% ( 284) 00:09:25.350 9472.931 - 9532.509: 27.4279% ( 249) 00:09:25.350 9532.509 - 9592.087: 29.6154% ( 273) 00:09:25.350 9592.087 - 9651.665: 31.9872% ( 296) 00:09:25.350 9651.665 - 9711.244: 34.7115% ( 340) 00:09:25.350 9711.244 - 9770.822: 37.9647% ( 406) 00:09:25.350 9770.822 - 9830.400: 41.3301% ( 420) 00:09:25.350 9830.400 - 9889.978: 44.7356% ( 425) 00:09:25.350 9889.978 - 9949.556: 48.3894% ( 456) 00:09:25.350 9949.556 - 10009.135: 51.7308% ( 417) 00:09:25.350 10009.135 - 10068.713: 54.8478% ( 389) 00:09:25.350 10068.713 - 10128.291: 57.5321% ( 335) 00:09:25.350 10128.291 - 10187.869: 60.2965% ( 345) 00:09:25.350 10187.869 - 10247.447: 62.9087% ( 326) 00:09:25.350 10247.447 - 10307.025: 65.4487% ( 317) 00:09:25.350 10307.025 - 10366.604: 68.0689% ( 327) 00:09:25.350 10366.604 - 10426.182: 70.3526% ( 285) 00:09:25.350 10426.182 - 10485.760: 72.7965% ( 305) 00:09:25.350 10485.760 - 10545.338: 75.0881% ( 286) 00:09:25.350 10545.338 - 10604.916: 77.2596% ( 271) 00:09:25.350 10604.916 - 10664.495: 79.1426% ( 235) 00:09:25.350 10664.495 - 10724.073: 80.8734% ( 216) 00:09:25.350 10724.073 - 10783.651: 82.2837% ( 176) 00:09:25.350 10783.651 - 10843.229: 83.5176% ( 154) 00:09:25.350 10843.229 - 10902.807: 84.5112% ( 124) 00:09:25.350 10902.807 - 10962.385: 85.2404% ( 91) 00:09:25.350 10962.385 - 11021.964: 85.9135% ( 84) 00:09:25.350 11021.964 - 11081.542: 86.5304% ( 77) 00:09:25.350 11081.542 - 11141.120: 87.1715% ( 80) 00:09:25.350 11141.120 - 11200.698: 87.7244% ( 69) 00:09:25.350 11200.698 - 11260.276: 88.2612% ( 67) 00:09:25.350 11260.276 - 11319.855: 88.7340% ( 59) 00:09:25.350 11319.855 - 11379.433: 89.1987% ( 58) 00:09:25.350 11379.433 - 11439.011: 89.6394% ( 55) 00:09:25.350 11439.011 - 11498.589: 90.0160% ( 47) 
00:09:25.350 11498.589 - 11558.167: 90.4006% ( 48) 00:09:25.350 11558.167 - 11617.745: 90.8654% ( 58) 00:09:25.350 11617.745 - 11677.324: 91.3221% ( 57) 00:09:25.350 11677.324 - 11736.902: 91.7788% ( 57) 00:09:25.350 11736.902 - 11796.480: 92.2196% ( 55) 00:09:25.350 11796.480 - 11856.058: 92.6843% ( 58) 00:09:25.350 11856.058 - 11915.636: 93.0288% ( 43) 00:09:25.350 11915.636 - 11975.215: 93.3654% ( 42) 00:09:25.350 11975.215 - 12034.793: 93.7099% ( 43) 00:09:25.350 12034.793 - 12094.371: 94.0625% ( 44) 00:09:25.350 12094.371 - 12153.949: 94.3510% ( 36) 00:09:25.350 12153.949 - 12213.527: 94.6715% ( 40) 00:09:25.350 12213.527 - 12273.105: 95.0721% ( 50) 00:09:25.350 12273.105 - 12332.684: 95.4968% ( 53) 00:09:25.350 12332.684 - 12392.262: 95.8574% ( 45) 00:09:25.350 12392.262 - 12451.840: 96.1779% ( 40) 00:09:25.350 12451.840 - 12511.418: 96.4503% ( 34) 00:09:25.350 12511.418 - 12570.996: 96.6747% ( 28) 00:09:25.350 12570.996 - 12630.575: 96.8830% ( 26) 00:09:25.350 12630.575 - 12690.153: 97.0433% ( 20) 00:09:25.350 12690.153 - 12749.731: 97.1875% ( 18) 00:09:25.350 12749.731 - 12809.309: 97.3237% ( 17) 00:09:25.350 12809.309 - 12868.887: 97.4519% ( 16) 00:09:25.350 12868.887 - 12928.465: 97.5721% ( 15) 00:09:25.350 12928.465 - 12988.044: 97.7003% ( 16) 00:09:25.350 12988.044 - 13047.622: 97.7965% ( 12) 00:09:25.350 13047.622 - 13107.200: 97.8846% ( 11) 00:09:25.350 13107.200 - 13166.778: 97.9487% ( 8) 00:09:25.350 13166.778 - 13226.356: 98.0128% ( 8) 00:09:25.350 13226.356 - 13285.935: 98.0689% ( 7) 00:09:25.350 13285.935 - 13345.513: 98.1250% ( 7) 00:09:25.350 13345.513 - 13405.091: 98.1651% ( 5) 00:09:25.351 13405.091 - 13464.669: 98.2131% ( 6) 00:09:25.351 13464.669 - 13524.247: 98.2612% ( 6) 00:09:25.351 13524.247 - 13583.825: 98.3093% ( 6) 00:09:25.351 13583.825 - 13643.404: 98.3974% ( 11) 00:09:25.351 13643.404 - 13702.982: 98.4615% ( 8) 00:09:25.351 13702.982 - 13762.560: 98.5577% ( 12) 00:09:25.351 13762.560 - 13822.138: 98.6058% ( 6) 00:09:25.351 13822.138 - 13881.716: 98.6619% ( 7) 00:09:25.351 13881.716 - 13941.295: 98.6939% ( 4) 00:09:25.351 13941.295 - 14000.873: 98.7340% ( 5) 00:09:25.351 14000.873 - 14060.451: 98.7660% ( 4) 00:09:25.351 14060.451 - 14120.029: 98.7821% ( 2) 00:09:25.351 14120.029 - 14179.607: 98.7901% ( 1) 00:09:25.351 14179.607 - 14239.185: 98.8061% ( 2) 00:09:25.351 14239.185 - 14298.764: 98.8141% ( 1) 00:09:25.351 14298.764 - 14358.342: 98.8301% ( 2) 00:09:25.351 14358.342 - 14417.920: 98.8462% ( 2) 00:09:25.351 14417.920 - 14477.498: 98.8622% ( 2) 00:09:25.351 14477.498 - 14537.076: 98.8782% ( 2) 00:09:25.351 14537.076 - 14596.655: 98.9022% ( 3) 00:09:25.351 14596.655 - 14656.233: 98.9183% ( 2) 00:09:25.351 14656.233 - 14715.811: 98.9423% ( 3) 00:09:25.351 14715.811 - 14775.389: 98.9663% ( 3) 00:09:25.351 14775.389 - 14834.967: 98.9744% ( 1) 00:09:25.351 17873.455 - 17992.611: 99.0224% ( 6) 00:09:25.351 17992.611 - 18111.767: 99.1186% ( 12) 00:09:25.351 18111.767 - 18230.924: 99.1506% ( 4) 00:09:25.351 18230.924 - 18350.080: 99.1827% ( 4) 00:09:25.351 18350.080 - 18469.236: 99.2147% ( 4) 00:09:25.351 18469.236 - 18588.393: 99.2548% ( 5) 00:09:25.351 18588.393 - 18707.549: 99.2949% ( 5) 00:09:25.351 18707.549 - 18826.705: 99.3349% ( 5) 00:09:25.351 18826.705 - 18945.862: 99.3830% ( 6) 00:09:25.351 18945.862 - 19065.018: 99.4071% ( 3) 00:09:25.351 19065.018 - 19184.175: 99.4551% ( 6) 00:09:25.351 19184.175 - 19303.331: 99.4872% ( 4) 00:09:25.351 24307.898 - 24427.055: 99.5353% ( 6) 00:09:25.351 24427.055 - 24546.211: 99.5753% ( 5) 00:09:25.351 24546.211 - 
24665.367: 99.6234% ( 6) 00:09:25.351 24665.367 - 24784.524: 99.6715% ( 6) 00:09:25.351 24784.524 - 24903.680: 99.7196% ( 6) 00:09:25.351 24903.680 - 25022.836: 99.7596% ( 5) 00:09:25.351 25022.836 - 25141.993: 99.8077% ( 6) 00:09:25.351 25141.993 - 25261.149: 99.8478% ( 5) 00:09:25.351 25261.149 - 25380.305: 99.8958% ( 6) 00:09:25.351 25380.305 - 25499.462: 99.9439% ( 6) 00:09:25.351 25499.462 - 25618.618: 99.9920% ( 6) 00:09:25.351 25618.618 - 25737.775: 100.0000% ( 1) 00:09:25.351 00:09:25.351 02:56:11 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:09:25.351 00:09:25.351 real 0m2.605s 00:09:25.351 user 0m2.251s 00:09:25.351 sys 0m0.239s 00:09:25.351 02:56:11 nvme.nvme_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:25.351 02:56:11 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:09:25.351 ************************************ 00:09:25.351 END TEST nvme_perf 00:09:25.351 ************************************ 00:09:25.351 02:56:11 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:09:25.351 02:56:11 nvme -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:09:25.351 02:56:11 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:25.351 02:56:11 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:25.351 ************************************ 00:09:25.351 START TEST nvme_hello_world 00:09:25.351 ************************************ 00:09:25.351 02:56:11 nvme.nvme_hello_world -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:09:25.610 Initializing NVMe Controllers 00:09:25.610 Attached to 0000:00:10.0 00:09:25.610 Namespace ID: 1 size: 6GB 00:09:25.610 Attached to 0000:00:11.0 00:09:25.610 Namespace ID: 1 size: 5GB 00:09:25.610 Attached to 0000:00:13.0 00:09:25.610 Namespace ID: 1 size: 1GB 00:09:25.610 Attached to 0000:00:12.0 00:09:25.610 Namespace ID: 1 size: 4GB 00:09:25.610 Namespace ID: 2 size: 4GB 00:09:25.610 Namespace ID: 3 size: 4GB 00:09:25.610 Initialization complete. 00:09:25.610 INFO: using host memory buffer for IO 00:09:25.610 Hello world! 00:09:25.610 INFO: using host memory buffer for IO 00:09:25.610 Hello world! 00:09:25.610 INFO: using host memory buffer for IO 00:09:25.610 Hello world! 00:09:25.610 INFO: using host memory buffer for IO 00:09:25.610 Hello world! 00:09:25.610 INFO: using host memory buffer for IO 00:09:25.610 Hello world! 00:09:25.610 INFO: using host memory buffer for IO 00:09:25.610 Hello world! 
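The cumulative latency histograms printed by the perf test above share a fixed text layout: a "Range in us" pair, the cumulative percentage of I/Os completed at or below the upper bound, and the I/O count for that bucket. Below is a minimal sketch of post-processing that output to estimate percentile latencies; the regular expression, the helper names, and the sample values (copied from one of the histograms above) are illustrative assumptions and are not part of the autotest scripts.

    import re

    # One histogram bucket in the perf output looks like:
    #   "9889.978 - 9949.556: 47.7965% (  313)"
    # i.e. a latency range in microseconds, the cumulative percentage of I/Os
    # at or below the upper bound, and the I/O count in this bucket. Timestamp
    # prefixes such as "00:09:25.346" do not match the pattern and are skipped.
    BUCKET_RE = re.compile(
        r"(\d+\.\d+)\s*-\s*(\d+\.\d+):\s*(\d+\.\d+)%\s*\(\s*(\d+)\)"
    )

    def parse_buckets(text):
        """Return (upper_bound_us, cumulative_percent) tuples in file order."""
        return [(float(hi), float(cum))
                for _lo, hi, cum, _count in BUCKET_RE.findall(text)]

    def percentile_latency(buckets, pct):
        """Upper bound of the first bucket whose cumulative percentage >= pct."""
        for upper_us, cum in buckets:
            if cum >= pct:
                return upper_us
        return None

    sample = "9889.978 - 9949.556: 47.7965% ( 313) 9949.556 - 10009.135: 50.4407% ( 330)"
    print("p50 estimate (us):", percentile_latency(parse_buckets(sample), 50.0))

Run against a full per-controller histogram, the same loop gives p99 or p99.9 estimates by changing the threshold; the estimate is bounded by the bucket width the tool chose.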
00:09:25.610 ************************************ 00:09:25.610 END TEST nvme_hello_world 00:09:25.610 ************************************ 00:09:25.610 00:09:25.610 real 0m0.271s 00:09:25.610 user 0m0.112s 00:09:25.610 sys 0m0.114s 00:09:25.610 02:56:11 nvme.nvme_hello_world -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:25.610 02:56:11 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:09:25.610 02:56:11 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:09:25.610 02:56:11 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:25.610 02:56:11 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:25.610 02:56:11 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:25.610 ************************************ 00:09:25.610 START TEST nvme_sgl 00:09:25.610 ************************************ 00:09:25.610 02:56:11 nvme.nvme_sgl -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:09:25.869 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:09:25.869 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:09:25.869 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:09:25.869 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:09:25.869 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:09:25.869 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:09:25.869 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:09:25.869 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:09:25.869 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:09:25.869 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:09:25.869 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:09:25.869 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:09:25.869 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:09:25.869 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:09:25.869 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:09:25.869 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:09:25.869 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:09:25.869 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:09:25.869 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:09:25.869 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:09:25.869 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:09:25.869 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:09:25.869 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:09:25.869 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:09:25.869 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:09:25.869 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:09:25.869 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:09:25.869 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:09:25.869 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:09:25.869 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:09:25.869 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:09:25.869 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:09:25.869 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:09:25.869 0000:00:12.0: build_io_request_9 Invalid IO length parameter 
00:09:25.869 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:09:25.869 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:09:25.869 NVMe Readv/Writev Request test 00:09:25.869 Attached to 0000:00:10.0 00:09:25.869 Attached to 0000:00:11.0 00:09:25.869 Attached to 0000:00:13.0 00:09:25.869 Attached to 0000:00:12.0 00:09:25.869 0000:00:10.0: build_io_request_2 test passed 00:09:25.869 0000:00:10.0: build_io_request_4 test passed 00:09:25.869 0000:00:10.0: build_io_request_5 test passed 00:09:25.869 0000:00:10.0: build_io_request_6 test passed 00:09:25.869 0000:00:10.0: build_io_request_7 test passed 00:09:25.869 0000:00:10.0: build_io_request_10 test passed 00:09:25.869 0000:00:11.0: build_io_request_2 test passed 00:09:25.869 0000:00:11.0: build_io_request_4 test passed 00:09:25.869 0000:00:11.0: build_io_request_5 test passed 00:09:25.869 0000:00:11.0: build_io_request_6 test passed 00:09:25.869 0000:00:11.0: build_io_request_7 test passed 00:09:25.869 0000:00:11.0: build_io_request_10 test passed 00:09:25.869 Cleaning up... 00:09:25.869 00:09:25.869 real 0m0.345s 00:09:25.869 user 0m0.178s 00:09:25.869 sys 0m0.122s 00:09:25.869 02:56:11 nvme.nvme_sgl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:25.869 02:56:11 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:09:25.869 ************************************ 00:09:25.869 END TEST nvme_sgl 00:09:25.869 ************************************ 00:09:25.869 02:56:11 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:09:25.869 02:56:11 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:25.869 02:56:11 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:25.869 02:56:11 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:26.127 ************************************ 00:09:26.127 START TEST nvme_e2edp 00:09:26.127 ************************************ 00:09:26.127 02:56:11 nvme.nvme_e2edp -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:09:26.386 NVMe Write/Read with End-to-End data protection test 00:09:26.386 Attached to 0000:00:10.0 00:09:26.386 Attached to 0000:00:11.0 00:09:26.386 Attached to 0000:00:13.0 00:09:26.386 Attached to 0000:00:12.0 00:09:26.386 Cleaning up... 
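In the sgl test output above, every "build_io_request_N" case per controller BDF either reports "test passed" or is rejected with "Invalid IO length parameter". A small sketch of tallying those outcomes from the raw log follows; the regular expression and the summarize helper are assumptions for illustration, not part of the sgl test itself.

    import re
    from collections import defaultdict

    # Result lines in the sgl test output look like either of:
    #   "0000:00:10.0: build_io_request_2 test passed"
    #   "0000:00:10.0: build_io_request_0 Invalid IO length parameter"
    RESULT_RE = re.compile(
        r"(\d{4}:\d{2}:\d{2}\.\d): build_io_request_(\d+) "
        r"(test passed|Invalid IO length parameter)"
    )

    def summarize(log_text):
        """Group sgl cases per controller into passed vs. rejected lists."""
        summary = defaultdict(lambda: {"passed": [], "rejected": []})
        for bdf, case, outcome in RESULT_RE.findall(log_text):
            key = "passed" if outcome == "test passed" else "rejected"
            summary[bdf][key].append(int(case))
        return dict(summary)

    sample = ("0000:00:10.0: build_io_request_0 Invalid IO length parameter "
              "0000:00:10.0: build_io_request_2 test passed")
    print(summarize(sample))

Such a tally makes it easy to confirm that each controller rejected the same set of invalid-length cases and passed the rest, which is what the output above shows for 0000:00:10.0 through 0000:00:12.0.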
00:09:26.386 00:09:26.386 real 0m0.271s 00:09:26.386 user 0m0.089s 00:09:26.386 sys 0m0.134s 00:09:26.386 02:56:12 nvme.nvme_e2edp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:26.386 ************************************ 00:09:26.386 END TEST nvme_e2edp 00:09:26.386 ************************************ 00:09:26.386 02:56:12 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:09:26.386 02:56:12 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:09:26.386 02:56:12 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:26.386 02:56:12 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:26.386 02:56:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:26.386 ************************************ 00:09:26.386 START TEST nvme_reserve 00:09:26.386 ************************************ 00:09:26.386 02:56:12 nvme.nvme_reserve -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:09:26.645 ===================================================== 00:09:26.645 NVMe Controller at PCI bus 0, device 16, function 0 00:09:26.645 ===================================================== 00:09:26.645 Reservations: Not Supported 00:09:26.645 ===================================================== 00:09:26.645 NVMe Controller at PCI bus 0, device 17, function 0 00:09:26.645 ===================================================== 00:09:26.645 Reservations: Not Supported 00:09:26.645 ===================================================== 00:09:26.645 NVMe Controller at PCI bus 0, device 19, function 0 00:09:26.645 ===================================================== 00:09:26.645 Reservations: Not Supported 00:09:26.645 ===================================================== 00:09:26.645 NVMe Controller at PCI bus 0, device 18, function 0 00:09:26.645 ===================================================== 00:09:26.645 Reservations: Not Supported 00:09:26.645 Reservation test passed 00:09:26.645 00:09:26.645 real 0m0.265s 00:09:26.645 user 0m0.099s 00:09:26.645 sys 0m0.117s 00:09:26.645 ************************************ 00:09:26.645 END TEST nvme_reserve 00:09:26.645 ************************************ 00:09:26.645 02:56:12 nvme.nvme_reserve -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:26.645 02:56:12 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:09:26.645 02:56:12 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:09:26.645 02:56:12 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:26.645 02:56:12 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:26.645 02:56:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:26.645 ************************************ 00:09:26.645 START TEST nvme_err_injection 00:09:26.645 ************************************ 00:09:26.645 02:56:12 nvme.nvme_err_injection -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:09:26.904 NVMe Error Injection test 00:09:26.904 Attached to 0000:00:10.0 00:09:26.904 Attached to 0000:00:11.0 00:09:26.904 Attached to 0000:00:13.0 00:09:26.904 Attached to 0000:00:12.0 00:09:26.904 0000:00:11.0: get features failed as expected 00:09:26.904 0000:00:13.0: get features failed as expected 00:09:26.904 0000:00:12.0: get features failed as expected 00:09:26.904 0000:00:10.0: get features failed as expected 00:09:26.904 
0000:00:11.0: get features successfully as expected 00:09:26.904 0000:00:13.0: get features successfully as expected 00:09:26.904 0000:00:12.0: get features successfully as expected 00:09:26.904 0000:00:10.0: get features successfully as expected 00:09:26.904 0000:00:10.0: read failed as expected 00:09:26.904 0000:00:11.0: read failed as expected 00:09:26.904 0000:00:13.0: read failed as expected 00:09:26.904 0000:00:12.0: read failed as expected 00:09:26.904 0000:00:10.0: read successfully as expected 00:09:26.904 0000:00:11.0: read successfully as expected 00:09:26.904 0000:00:13.0: read successfully as expected 00:09:26.904 0000:00:12.0: read successfully as expected 00:09:26.904 Cleaning up... 00:09:26.904 ************************************ 00:09:26.904 END TEST nvme_err_injection 00:09:26.904 ************************************ 00:09:26.904 00:09:26.904 real 0m0.270s 00:09:26.904 user 0m0.098s 00:09:26.904 sys 0m0.131s 00:09:26.904 02:56:12 nvme.nvme_err_injection -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:26.904 02:56:12 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:09:26.904 02:56:12 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:09:26.904 02:56:12 nvme -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:09:26.904 02:56:12 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:26.904 02:56:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:26.904 ************************************ 00:09:26.904 START TEST nvme_overhead 00:09:26.904 ************************************ 00:09:26.904 02:56:12 nvme.nvme_overhead -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:09:28.282 Initializing NVMe Controllers 00:09:28.282 Attached to 0000:00:10.0 00:09:28.282 Attached to 0000:00:11.0 00:09:28.282 Attached to 0000:00:13.0 00:09:28.282 Attached to 0000:00:12.0 00:09:28.282 Initialization complete. Launching workers. 
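The overhead results that follow report "submit (in ns) avg, min, max" and "complete (in ns) avg, min, max", plus cumulative histograms bucketed in microseconds. The sketch below shows one way the same summary shape could be derived from a list of per-I/O latencies; the function name, the bucket edges, and the sample values are assumptions for illustration, and the real tool performs its own measurement and bucketing.

    def summarize_latencies(samples_ns, bucket_edges_us):
        """Return (avg, min, max) in ns and a cumulative histogram over the
        given microsecond bucket upper bounds, mirroring the overhead output."""
        avg = sum(samples_ns) / len(samples_ns)
        stats = (avg, min(samples_ns), max(samples_ns))

        total = len(samples_ns)
        cumulative, seen = [], 0
        for edge_us in bucket_edges_us:
            at_or_below = sum(1 for s in samples_ns if s / 1000.0 <= edge_us)
            in_bucket = at_or_below - seen
            seen = at_or_below
            cumulative.append((edge_us, 100.0 * seen / total, in_bucket))
        return stats, cumulative

    # Hypothetical per-I/O submit latencies in nanoseconds; the real run above
    # measures every I/O against the attached controllers.
    stats, hist = summarize_latencies([15000, 16800, 17200, 21000, 90000],
                                      [14.0, 18.0, 22.0, 120.0])
    print("submit (in ns) avg, min, max = %.1f, %.1f, %.1f" % stats)
    for edge_us, cum_pct, count in hist:
        print("%10.3f: %7.4f%% ( %d)" % (edge_us, cum_pct, count))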
00:09:28.282 submit (in ns) avg, min, max = 17106.8, 13766.4, 113589.1 00:09:28.282 complete (in ns) avg, min, max = 11820.9, 8827.7, 99580.0 00:09:28.282 00:09:28.282 Submit histogram 00:09:28.282 ================ 00:09:28.282 Range in us Cumulative Count 00:09:28.282 13.731 - 13.789: 0.0116% ( 1) 00:09:28.282 13.789 - 13.847: 0.0232% ( 1) 00:09:28.282 13.847 - 13.905: 0.0697% ( 4) 00:09:28.282 13.905 - 13.964: 0.3484% ( 24) 00:09:28.282 13.964 - 14.022: 1.3587% ( 87) 00:09:28.282 14.022 - 14.080: 3.0658% ( 147) 00:09:28.282 14.080 - 14.138: 5.6672% ( 224) 00:09:28.282 14.138 - 14.196: 7.5949% ( 166) 00:09:28.282 14.196 - 14.255: 9.0930% ( 129) 00:09:28.282 14.255 - 14.313: 10.3705% ( 110) 00:09:28.282 14.313 - 14.371: 11.7292% ( 117) 00:09:28.282 14.371 - 14.429: 14.9228% ( 275) 00:09:28.282 14.429 - 14.487: 19.4635% ( 391) 00:09:28.282 14.487 - 14.545: 24.8055% ( 460) 00:09:28.282 14.545 - 14.604: 29.0094% ( 362) 00:09:28.282 14.604 - 14.662: 31.8894% ( 248) 00:09:28.282 14.662 - 14.720: 33.4804% ( 137) 00:09:28.282 14.720 - 14.778: 35.0714% ( 137) 00:09:28.282 14.778 - 14.836: 37.0340% ( 169) 00:09:28.282 14.836 - 14.895: 39.5308% ( 215) 00:09:28.282 14.895 - 15.011: 45.3141% ( 498) 00:09:28.282 15.011 - 15.127: 49.2974% ( 343) 00:09:28.282 15.127 - 15.244: 51.7362% ( 210) 00:09:28.282 15.244 - 15.360: 52.9671% ( 106) 00:09:28.282 15.360 - 15.476: 53.8613% ( 77) 00:09:28.282 15.476 - 15.593: 54.5117% ( 56) 00:09:28.282 15.593 - 15.709: 55.0691% ( 48) 00:09:28.282 15.709 - 15.825: 55.4523% ( 33) 00:09:28.282 15.825 - 15.942: 55.6265% ( 15) 00:09:28.282 15.942 - 16.058: 55.7891% ( 14) 00:09:28.282 16.058 - 16.175: 55.9169% ( 11) 00:09:28.282 16.175 - 16.291: 56.0446% ( 11) 00:09:28.282 16.291 - 16.407: 56.1259% ( 7) 00:09:28.282 16.407 - 16.524: 56.2072% ( 7) 00:09:28.282 16.524 - 16.640: 56.2769% ( 6) 00:09:28.282 16.640 - 16.756: 56.3001% ( 2) 00:09:28.282 16.756 - 16.873: 56.3117% ( 1) 00:09:28.282 16.989 - 17.105: 56.3349% ( 2) 00:09:28.282 17.105 - 17.222: 56.3581% ( 2) 00:09:28.282 17.222 - 17.338: 56.3930% ( 3) 00:09:28.282 17.338 - 17.455: 57.3917% ( 86) 00:09:28.282 17.455 - 17.571: 62.6060% ( 449) 00:09:28.283 17.571 - 17.687: 71.0835% ( 730) 00:09:28.283 17.687 - 17.804: 77.4823% ( 551) 00:09:28.283 17.804 - 17.920: 80.0372% ( 220) 00:09:28.283 17.920 - 18.036: 81.6746% ( 141) 00:09:28.283 18.036 - 18.153: 83.0217% ( 116) 00:09:28.283 18.153 - 18.269: 84.4269% ( 121) 00:09:28.283 18.269 - 18.385: 85.6230% ( 103) 00:09:28.283 18.385 - 18.502: 86.2385% ( 53) 00:09:28.283 18.502 - 18.618: 86.6798% ( 38) 00:09:28.283 18.618 - 18.735: 86.9934% ( 27) 00:09:28.283 18.735 - 18.851: 87.2140% ( 19) 00:09:28.283 18.851 - 18.967: 87.3998% ( 16) 00:09:28.283 18.967 - 19.084: 87.5044% ( 9) 00:09:28.283 19.084 - 19.200: 87.6553% ( 13) 00:09:28.283 19.200 - 19.316: 87.8644% ( 18) 00:09:28.283 19.316 - 19.433: 88.0269% ( 14) 00:09:28.283 19.433 - 19.549: 88.1663% ( 12) 00:09:28.283 19.549 - 19.665: 88.3289% ( 14) 00:09:28.283 19.665 - 19.782: 88.4682% ( 12) 00:09:28.283 19.782 - 19.898: 88.5611% ( 8) 00:09:28.283 19.898 - 20.015: 88.7237% ( 14) 00:09:28.283 20.015 - 20.131: 88.8863% ( 14) 00:09:28.283 20.131 - 20.247: 89.0141% ( 11) 00:09:28.283 20.247 - 20.364: 89.1766% ( 14) 00:09:28.283 20.364 - 20.480: 89.3508% ( 15) 00:09:28.283 20.480 - 20.596: 89.4902% ( 12) 00:09:28.283 20.596 - 20.713: 89.6412% ( 13) 00:09:28.283 20.713 - 20.829: 89.7921% ( 13) 00:09:28.283 20.829 - 20.945: 90.0476% ( 22) 00:09:28.283 20.945 - 21.062: 90.1521% ( 9) 00:09:28.283 21.062 - 21.178: 90.2915% ( 12) 
00:09:28.283 21.178 - 21.295: 90.3844% ( 8) 00:09:28.283 21.295 - 21.411: 90.4889% ( 9) 00:09:28.283 21.411 - 21.527: 90.6283% ( 12) 00:09:28.283 21.527 - 21.644: 90.6979% ( 6) 00:09:28.283 21.644 - 21.760: 90.7908% ( 8) 00:09:28.283 21.760 - 21.876: 90.8721% ( 7) 00:09:28.283 21.876 - 21.993: 90.9534% ( 7) 00:09:28.283 21.993 - 22.109: 91.0928% ( 12) 00:09:28.283 22.109 - 22.225: 91.1857% ( 8) 00:09:28.283 22.225 - 22.342: 91.2902% ( 9) 00:09:28.283 22.342 - 22.458: 91.3831% ( 8) 00:09:28.283 22.458 - 22.575: 91.4760% ( 8) 00:09:28.283 22.575 - 22.691: 91.5573% ( 7) 00:09:28.283 22.691 - 22.807: 91.6038% ( 4) 00:09:28.283 22.807 - 22.924: 91.6851% ( 7) 00:09:28.283 22.924 - 23.040: 91.8012% ( 10) 00:09:28.283 23.040 - 23.156: 91.8709% ( 6) 00:09:28.283 23.156 - 23.273: 91.9405% ( 6) 00:09:28.283 23.273 - 23.389: 92.0567% ( 10) 00:09:28.283 23.389 - 23.505: 92.1380% ( 7) 00:09:28.283 23.505 - 23.622: 92.2193% ( 7) 00:09:28.283 23.622 - 23.738: 92.2889% ( 6) 00:09:28.283 23.738 - 23.855: 92.3470% ( 5) 00:09:28.283 23.855 - 23.971: 92.4283% ( 7) 00:09:28.283 23.971 - 24.087: 92.5560% ( 11) 00:09:28.283 24.087 - 24.204: 92.6141% ( 5) 00:09:28.283 24.204 - 24.320: 92.6838% ( 6) 00:09:28.283 24.320 - 24.436: 92.8231% ( 12) 00:09:28.283 24.436 - 24.553: 92.9160% ( 8) 00:09:28.283 24.553 - 24.669: 92.9973% ( 7) 00:09:28.283 24.669 - 24.785: 93.0206% ( 2) 00:09:28.283 24.785 - 24.902: 93.1135% ( 8) 00:09:28.283 24.902 - 25.018: 93.1483% ( 3) 00:09:28.283 25.018 - 25.135: 93.1948% ( 4) 00:09:28.283 25.135 - 25.251: 93.2064% ( 1) 00:09:28.283 25.251 - 25.367: 93.3689% ( 14) 00:09:28.283 25.367 - 25.484: 93.4270% ( 5) 00:09:28.283 25.484 - 25.600: 93.4502% ( 2) 00:09:28.283 25.600 - 25.716: 93.4967% ( 4) 00:09:28.283 25.716 - 25.833: 93.5548% ( 5) 00:09:28.283 25.833 - 25.949: 93.5780% ( 2) 00:09:28.283 25.949 - 26.065: 93.6244% ( 4) 00:09:28.283 26.065 - 26.182: 93.6593% ( 3) 00:09:28.283 26.182 - 26.298: 93.7173% ( 5) 00:09:28.283 26.298 - 26.415: 93.7754% ( 5) 00:09:28.283 26.415 - 26.531: 93.8335% ( 5) 00:09:28.283 26.531 - 26.647: 93.9031% ( 6) 00:09:28.283 26.647 - 26.764: 93.9612% ( 5) 00:09:28.283 26.764 - 26.880: 94.0077% ( 4) 00:09:28.283 26.880 - 26.996: 94.0309% ( 2) 00:09:28.283 26.996 - 27.113: 94.1006% ( 6) 00:09:28.283 27.113 - 27.229: 94.1354% ( 3) 00:09:28.283 27.229 - 27.345: 94.2399% ( 9) 00:09:28.283 27.345 - 27.462: 94.2980% ( 5) 00:09:28.283 27.462 - 27.578: 94.3909% ( 8) 00:09:28.283 27.578 - 27.695: 94.4257% ( 3) 00:09:28.283 27.811 - 27.927: 94.4722% ( 4) 00:09:28.283 27.927 - 28.044: 94.5419% ( 6) 00:09:28.283 28.044 - 28.160: 94.6115% ( 6) 00:09:28.283 28.160 - 28.276: 94.6580% ( 4) 00:09:28.283 28.276 - 28.393: 94.6696% ( 1) 00:09:28.283 28.393 - 28.509: 94.6812% ( 1) 00:09:28.283 28.509 - 28.625: 94.7625% ( 7) 00:09:28.283 28.625 - 28.742: 94.9715% ( 18) 00:09:28.283 28.742 - 28.858: 95.3083% ( 29) 00:09:28.283 28.858 - 28.975: 95.8309% ( 45) 00:09:28.283 28.975 - 29.091: 96.2954% ( 40) 00:09:28.283 29.091 - 29.207: 96.7019% ( 35) 00:09:28.283 29.207 - 29.324: 97.0619% ( 31) 00:09:28.283 29.324 - 29.440: 97.3755% ( 27) 00:09:28.283 29.440 - 29.556: 97.7819% ( 35) 00:09:28.283 29.556 - 29.673: 98.0838% ( 26) 00:09:28.283 29.673 - 29.789: 98.2348% ( 13) 00:09:28.283 29.789 - 30.022: 98.4555% ( 19) 00:09:28.283 30.022 - 30.255: 98.5368% ( 7) 00:09:28.283 30.255 - 30.487: 98.6529% ( 10) 00:09:28.283 30.487 - 30.720: 98.7342% ( 7) 00:09:28.283 30.720 - 30.953: 98.8155% ( 7) 00:09:28.283 30.953 - 31.185: 98.8619% ( 4) 00:09:28.283 31.185 - 31.418: 98.9664% ( 9) 00:09:28.283 
31.418 - 31.651: 99.0361% ( 6) 00:09:28.283 31.651 - 31.884: 99.0477% ( 1) 00:09:28.283 31.884 - 32.116: 99.0826% ( 3) 00:09:28.283 32.116 - 32.349: 99.1174% ( 3) 00:09:28.283 32.349 - 32.582: 99.1406% ( 2) 00:09:28.283 32.582 - 32.815: 99.1755% ( 3) 00:09:28.283 33.047 - 33.280: 99.1871% ( 1) 00:09:28.283 33.280 - 33.513: 99.1987% ( 1) 00:09:28.283 33.745 - 33.978: 99.2219% ( 2) 00:09:28.283 33.978 - 34.211: 99.2452% ( 2) 00:09:28.283 34.211 - 34.444: 99.2684% ( 2) 00:09:28.283 34.444 - 34.676: 99.2916% ( 2) 00:09:28.283 34.676 - 34.909: 99.3613% ( 6) 00:09:28.283 34.909 - 35.142: 99.4077% ( 4) 00:09:28.283 35.142 - 35.375: 99.4310% ( 2) 00:09:28.283 35.375 - 35.607: 99.4890% ( 5) 00:09:28.283 35.840 - 36.073: 99.5355% ( 4) 00:09:28.283 36.073 - 36.305: 99.5587% ( 2) 00:09:28.283 36.538 - 36.771: 99.5819% ( 2) 00:09:28.283 36.771 - 37.004: 99.6052% ( 2) 00:09:28.283 37.004 - 37.236: 99.6632% ( 5) 00:09:28.283 37.236 - 37.469: 99.6748% ( 1) 00:09:28.283 37.469 - 37.702: 99.6981% ( 2) 00:09:28.283 37.935 - 38.167: 99.7097% ( 1) 00:09:28.283 38.167 - 38.400: 99.7329% ( 2) 00:09:28.283 38.400 - 38.633: 99.7677% ( 3) 00:09:28.283 38.633 - 38.865: 99.7794% ( 1) 00:09:28.283 40.029 - 40.262: 99.7910% ( 1) 00:09:28.283 40.495 - 40.727: 99.8142% ( 2) 00:09:28.283 41.193 - 41.425: 99.8374% ( 2) 00:09:28.283 41.891 - 42.124: 99.8490% ( 1) 00:09:28.283 42.822 - 43.055: 99.8606% ( 1) 00:09:28.283 43.985 - 44.218: 99.8723% ( 1) 00:09:28.283 44.451 - 44.684: 99.8839% ( 1) 00:09:28.283 45.149 - 45.382: 99.8955% ( 1) 00:09:28.283 45.615 - 45.847: 99.9187% ( 2) 00:09:28.283 45.847 - 46.080: 99.9303% ( 1) 00:09:28.283 51.665 - 51.898: 99.9419% ( 1) 00:09:28.283 52.829 - 53.062: 99.9535% ( 1) 00:09:28.283 86.109 - 86.575: 99.9652% ( 1) 00:09:28.283 90.764 - 91.229: 99.9768% ( 1) 00:09:28.283 93.091 - 93.556: 99.9884% ( 1) 00:09:28.283 113.571 - 114.036: 100.0000% ( 1) 00:09:28.283 00:09:28.283 Complete histogram 00:09:28.283 ================== 00:09:28.283 Range in us Cumulative Count 00:09:28.283 8.785 - 8.844: 0.0232% ( 2) 00:09:28.283 8.844 - 8.902: 0.2206% ( 17) 00:09:28.283 8.902 - 8.960: 0.8942% ( 58) 00:09:28.283 8.960 - 9.018: 1.8813% ( 85) 00:09:28.283 9.018 - 9.076: 2.8104% ( 80) 00:09:28.283 9.076 - 9.135: 3.3794% ( 49) 00:09:28.283 9.135 - 9.193: 4.4594% ( 93) 00:09:28.283 9.193 - 9.251: 6.7240% ( 195) 00:09:28.283 9.251 - 9.309: 11.2879% ( 393) 00:09:28.283 9.309 - 9.367: 16.5602% ( 454) 00:09:28.283 9.367 - 9.425: 20.4157% ( 332) 00:09:28.283 9.425 - 9.484: 22.8196% ( 207) 00:09:28.283 9.484 - 9.542: 25.3048% ( 214) 00:09:28.283 9.542 - 9.600: 29.3926% ( 352) 00:09:28.283 9.600 - 9.658: 35.5708% ( 532) 00:09:28.283 9.658 - 9.716: 40.1463% ( 394) 00:09:28.283 9.716 - 9.775: 42.8986% ( 237) 00:09:28.283 9.775 - 9.833: 44.3851% ( 128) 00:09:28.283 9.833 - 9.891: 45.3954% ( 87) 00:09:28.283 9.891 - 9.949: 46.4871% ( 94) 00:09:28.283 9.949 - 10.007: 47.8690% ( 119) 00:09:28.283 10.007 - 10.065: 49.0303% ( 100) 00:09:28.283 10.065 - 10.124: 49.9477% ( 79) 00:09:28.283 10.124 - 10.182: 50.7490% ( 69) 00:09:28.283 10.182 - 10.240: 51.4226% ( 58) 00:09:28.283 10.240 - 10.298: 51.8987% ( 41) 00:09:28.283 10.298 - 10.356: 52.3052% ( 35) 00:09:28.283 10.356 - 10.415: 52.6652% ( 31) 00:09:28.284 10.415 - 10.473: 52.9439% ( 24) 00:09:28.284 10.473 - 10.531: 53.1878% ( 21) 00:09:28.284 10.531 - 10.589: 53.3504% ( 14) 00:09:28.284 10.589 - 10.647: 53.5826% ( 20) 00:09:28.284 10.647 - 10.705: 53.6988% ( 10) 00:09:28.284 10.705 - 10.764: 53.7568% ( 5) 00:09:28.284 10.764 - 10.822: 53.7917% ( 3) 00:09:28.284 
10.822 - 10.880: 53.8381% ( 4) 00:09:28.284 10.880 - 10.938: 53.8497% ( 1) 00:09:28.284 10.938 - 10.996: 53.8613% ( 1) 00:09:28.284 10.996 - 11.055: 53.8846% ( 2) 00:09:28.284 11.055 - 11.113: 53.8962% ( 1) 00:09:28.284 11.113 - 11.171: 53.9310% ( 3) 00:09:28.284 11.171 - 11.229: 54.0007% ( 6) 00:09:28.284 11.229 - 11.287: 54.1052% ( 9) 00:09:28.284 11.287 - 11.345: 54.1865% ( 7) 00:09:28.284 11.345 - 11.404: 54.3259% ( 12) 00:09:28.284 11.404 - 11.462: 54.7439% ( 36) 00:09:28.284 11.462 - 11.520: 56.2769% ( 132) 00:09:28.284 11.520 - 11.578: 59.7956% ( 303) 00:09:28.284 11.578 - 11.636: 65.5557% ( 496) 00:09:28.284 11.636 - 11.695: 72.1984% ( 572) 00:09:28.284 11.695 - 11.753: 76.4255% ( 364) 00:09:28.284 11.753 - 11.811: 79.2126% ( 240) 00:09:28.284 11.811 - 11.869: 80.4785% ( 109) 00:09:28.284 11.869 - 11.927: 81.2333% ( 65) 00:09:28.284 11.927 - 11.985: 81.7675% ( 46) 00:09:28.284 11.985 - 12.044: 82.2669% ( 43) 00:09:28.284 12.044 - 12.102: 82.5572% ( 25) 00:09:28.284 12.102 - 12.160: 82.7314% ( 15) 00:09:28.284 12.160 - 12.218: 82.9520% ( 19) 00:09:28.284 12.218 - 12.276: 83.1843% ( 20) 00:09:28.284 12.276 - 12.335: 83.5211% ( 29) 00:09:28.284 12.335 - 12.393: 84.1482% ( 54) 00:09:28.284 12.393 - 12.451: 84.7288% ( 50) 00:09:28.284 12.451 - 12.509: 85.5417% ( 70) 00:09:28.284 12.509 - 12.567: 86.2850% ( 64) 00:09:28.284 12.567 - 12.625: 86.8656% ( 50) 00:09:28.284 12.625 - 12.684: 87.2837% ( 36) 00:09:28.284 12.684 - 12.742: 87.5973% ( 27) 00:09:28.284 12.742 - 12.800: 87.7947% ( 17) 00:09:28.284 12.800 - 12.858: 87.9224% ( 11) 00:09:28.284 12.858 - 12.916: 88.0966% ( 15) 00:09:28.284 12.916 - 12.975: 88.1547% ( 5) 00:09:28.284 12.975 - 13.033: 88.1895% ( 3) 00:09:28.284 13.033 - 13.091: 88.2360% ( 4) 00:09:28.284 13.091 - 13.149: 88.2824% ( 4) 00:09:28.284 13.149 - 13.207: 88.3057% ( 2) 00:09:28.284 13.207 - 13.265: 88.3521% ( 4) 00:09:28.284 13.265 - 13.324: 88.3753% ( 2) 00:09:28.284 13.324 - 13.382: 88.4102% ( 3) 00:09:28.284 13.382 - 13.440: 88.4566% ( 4) 00:09:28.284 13.440 - 13.498: 88.5263% ( 6) 00:09:28.284 13.498 - 13.556: 88.5495% ( 2) 00:09:28.284 13.556 - 13.615: 88.5728% ( 2) 00:09:28.284 13.615 - 13.673: 88.5960% ( 2) 00:09:28.284 13.673 - 13.731: 88.6308% ( 3) 00:09:28.284 13.731 - 13.789: 88.6540% ( 2) 00:09:28.284 13.789 - 13.847: 88.7005% ( 4) 00:09:28.284 13.847 - 13.905: 88.7121% ( 1) 00:09:28.284 13.905 - 13.964: 88.7470% ( 3) 00:09:28.284 13.964 - 14.022: 88.7702% ( 2) 00:09:28.284 14.022 - 14.080: 88.8050% ( 3) 00:09:28.284 14.080 - 14.138: 88.8747% ( 6) 00:09:28.284 14.138 - 14.196: 88.9908% ( 10) 00:09:28.284 14.196 - 14.255: 89.0024% ( 1) 00:09:28.284 14.255 - 14.313: 89.0141% ( 1) 00:09:28.284 14.313 - 14.371: 89.0489% ( 3) 00:09:28.284 14.371 - 14.429: 89.0721% ( 2) 00:09:28.284 14.429 - 14.487: 89.1302% ( 5) 00:09:28.284 14.487 - 14.545: 89.2115% ( 7) 00:09:28.284 14.545 - 14.604: 89.2695% ( 5) 00:09:28.284 14.604 - 14.662: 89.3276% ( 5) 00:09:28.284 14.662 - 14.720: 89.3857% ( 5) 00:09:28.284 14.720 - 14.778: 89.4670% ( 7) 00:09:28.284 14.778 - 14.836: 89.5599% ( 8) 00:09:28.284 14.836 - 14.895: 89.5947% ( 3) 00:09:28.284 14.895 - 15.011: 89.6760% ( 7) 00:09:28.284 15.011 - 15.127: 89.6992% ( 2) 00:09:28.284 15.127 - 15.244: 89.7689% ( 6) 00:09:28.284 15.244 - 15.360: 89.8618% ( 8) 00:09:28.284 15.360 - 15.476: 89.8966% ( 3) 00:09:28.284 15.476 - 15.593: 89.9315% ( 3) 00:09:28.284 15.593 - 15.709: 90.0012% ( 6) 00:09:28.284 15.709 - 15.825: 90.1173% ( 10) 00:09:28.284 15.825 - 15.942: 90.1521% ( 3) 00:09:28.284 15.942 - 16.058: 90.2218% ( 6) 
00:09:28.284 16.058 - 16.175: 90.3612% ( 12) 00:09:28.284 16.175 - 16.291: 90.4541% ( 8) 00:09:28.284 16.291 - 16.407: 90.5586% ( 9) 00:09:28.284 16.407 - 16.524: 90.6399% ( 7) 00:09:28.284 16.524 - 16.640: 90.7328% ( 8) 00:09:28.284 16.640 - 16.756: 90.7908% ( 5) 00:09:28.284 16.756 - 16.873: 90.8605% ( 6) 00:09:28.284 16.873 - 16.989: 90.8838% ( 2) 00:09:28.284 16.989 - 17.105: 90.9418% ( 5) 00:09:28.284 17.105 - 17.222: 91.0347% ( 8) 00:09:28.284 17.222 - 17.338: 91.0579% ( 2) 00:09:28.284 17.338 - 17.455: 91.1276% ( 6) 00:09:28.284 17.455 - 17.571: 91.1741% ( 4) 00:09:28.284 17.571 - 17.687: 91.2205% ( 4) 00:09:28.284 17.687 - 17.804: 91.2902% ( 6) 00:09:28.284 17.804 - 17.920: 91.3599% ( 6) 00:09:28.284 17.920 - 18.036: 91.4296% ( 6) 00:09:28.284 18.036 - 18.153: 91.4644% ( 3) 00:09:28.284 18.153 - 18.269: 91.4760% ( 1) 00:09:28.284 18.269 - 18.385: 91.5109% ( 3) 00:09:28.284 18.385 - 18.502: 91.5457% ( 3) 00:09:28.284 18.502 - 18.618: 91.6038% ( 5) 00:09:28.284 18.618 - 18.735: 91.6502% ( 4) 00:09:28.284 18.735 - 18.851: 91.6618% ( 1) 00:09:28.284 18.851 - 18.967: 91.6851% ( 2) 00:09:28.284 18.967 - 19.084: 91.7315% ( 4) 00:09:28.284 19.084 - 19.200: 91.7896% ( 5) 00:09:28.284 19.200 - 19.316: 91.8476% ( 5) 00:09:28.284 19.316 - 19.433: 91.8709% ( 2) 00:09:28.284 19.433 - 19.549: 91.9057% ( 3) 00:09:28.284 19.549 - 19.665: 91.9754% ( 6) 00:09:28.284 19.665 - 19.782: 92.0218% ( 4) 00:09:28.284 19.782 - 19.898: 92.0915% ( 6) 00:09:28.284 19.898 - 20.015: 92.1264% ( 3) 00:09:28.284 20.131 - 20.247: 92.1844% ( 5) 00:09:28.284 20.247 - 20.364: 92.2309% ( 4) 00:09:28.284 20.364 - 20.480: 92.2657% ( 3) 00:09:28.284 20.480 - 20.596: 92.3586% ( 8) 00:09:28.284 20.596 - 20.713: 92.3818% ( 2) 00:09:28.284 20.713 - 20.829: 92.4167% ( 3) 00:09:28.284 20.829 - 20.945: 92.4399% ( 2) 00:09:28.284 20.945 - 21.062: 92.4864% ( 4) 00:09:28.284 21.062 - 21.178: 92.5096% ( 2) 00:09:28.284 21.178 - 21.295: 92.5793% ( 6) 00:09:28.284 21.295 - 21.411: 92.6257% ( 4) 00:09:28.284 21.411 - 21.527: 92.6373% ( 1) 00:09:28.284 21.527 - 21.644: 92.6722% ( 3) 00:09:28.284 21.644 - 21.760: 92.7302% ( 5) 00:09:28.284 21.876 - 21.993: 92.7418% ( 1) 00:09:28.284 21.993 - 22.109: 92.7883% ( 4) 00:09:28.284 22.109 - 22.225: 92.8231% ( 3) 00:09:28.284 22.225 - 22.342: 92.8464% ( 2) 00:09:28.284 22.342 - 22.458: 92.9044% ( 5) 00:09:28.284 22.458 - 22.575: 92.9160% ( 1) 00:09:28.284 22.691 - 22.807: 92.9741% ( 5) 00:09:28.284 22.807 - 22.924: 93.0206% ( 4) 00:09:28.284 23.156 - 23.273: 93.0438% ( 2) 00:09:28.284 23.273 - 23.389: 93.0554% ( 1) 00:09:28.284 23.389 - 23.505: 93.0786% ( 2) 00:09:28.284 23.505 - 23.622: 93.2412% ( 14) 00:09:28.284 23.622 - 23.738: 93.6477% ( 35) 00:09:28.284 23.738 - 23.855: 94.2515% ( 52) 00:09:28.284 23.855 - 23.971: 94.9599% ( 61) 00:09:28.284 23.971 - 24.087: 95.5754% ( 53) 00:09:28.284 24.087 - 24.204: 96.2606% ( 59) 00:09:28.284 24.204 - 24.320: 96.8761% ( 53) 00:09:28.284 24.320 - 24.436: 97.3174% ( 38) 00:09:28.284 24.436 - 24.553: 97.6774% ( 31) 00:09:28.284 24.553 - 24.669: 98.0606% ( 33) 00:09:28.284 24.669 - 24.785: 98.3045% ( 21) 00:09:28.284 24.785 - 24.902: 98.4671% ( 14) 00:09:28.284 24.902 - 25.018: 98.6064% ( 12) 00:09:28.284 25.018 - 25.135: 98.6529% ( 4) 00:09:28.284 25.135 - 25.251: 98.6993% ( 4) 00:09:28.284 25.251 - 25.367: 98.7458% ( 4) 00:09:28.284 25.367 - 25.484: 98.7574% ( 1) 00:09:28.284 25.484 - 25.600: 98.8271% ( 6) 00:09:28.284 25.600 - 25.716: 98.8387% ( 1) 00:09:28.284 25.716 - 25.833: 98.9316% ( 8) 00:09:28.284 25.833 - 25.949: 99.0129% ( 7) 00:09:28.284 25.949 
- 26.065: 99.0593% ( 4) 00:09:28.284 26.065 - 26.182: 99.0942% ( 3) 00:09:28.284 26.182 - 26.298: 99.1174% ( 2) 00:09:28.284 26.298 - 26.415: 99.1290% ( 1) 00:09:28.285 26.415 - 26.531: 99.1755% ( 4) 00:09:28.285 26.531 - 26.647: 99.2219% ( 4) 00:09:28.285 26.996 - 27.113: 99.2568% ( 3) 00:09:28.285 27.113 - 27.229: 99.2684% ( 1) 00:09:28.285 27.345 - 27.462: 99.2800% ( 1) 00:09:28.285 27.695 - 27.811: 99.2916% ( 1) 00:09:28.285 27.927 - 28.044: 99.3148% ( 2) 00:09:28.285 28.044 - 28.160: 99.3264% ( 1) 00:09:28.285 28.276 - 28.393: 99.3381% ( 1) 00:09:28.285 28.742 - 28.858: 99.3497% ( 1) 00:09:28.285 29.673 - 29.789: 99.3613% ( 1) 00:09:28.285 29.789 - 30.022: 99.3729% ( 1) 00:09:28.285 30.022 - 30.255: 99.4193% ( 4) 00:09:28.285 30.255 - 30.487: 99.4542% ( 3) 00:09:28.285 30.487 - 30.720: 99.5471% ( 8) 00:09:28.285 30.720 - 30.953: 99.5935% ( 4) 00:09:28.285 30.953 - 31.185: 99.6516% ( 5) 00:09:28.285 31.185 - 31.418: 99.6632% ( 1) 00:09:28.285 31.418 - 31.651: 99.6864% ( 2) 00:09:28.285 31.651 - 31.884: 99.7097% ( 2) 00:09:28.285 31.884 - 32.116: 99.7561% ( 4) 00:09:28.285 32.116 - 32.349: 99.7677% ( 1) 00:09:28.285 32.815 - 33.047: 99.8026% ( 3) 00:09:28.285 33.280 - 33.513: 99.8142% ( 1) 00:09:28.285 34.444 - 34.676: 99.8258% ( 1) 00:09:28.285 36.538 - 36.771: 99.8374% ( 1) 00:09:28.285 37.004 - 37.236: 99.8490% ( 1) 00:09:28.285 37.935 - 38.167: 99.8606% ( 1) 00:09:28.285 38.865 - 39.098: 99.8723% ( 1) 00:09:28.285 40.029 - 40.262: 99.8839% ( 1) 00:09:28.285 40.262 - 40.495: 99.9187% ( 3) 00:09:28.285 40.495 - 40.727: 99.9303% ( 1) 00:09:28.285 41.891 - 42.124: 99.9419% ( 1) 00:09:28.285 45.615 - 45.847: 99.9535% ( 1) 00:09:28.285 88.436 - 88.902: 99.9652% ( 1) 00:09:28.285 94.022 - 94.487: 99.9768% ( 1) 00:09:28.285 94.953 - 95.418: 99.9884% ( 1) 00:09:28.285 99.142 - 99.607: 100.0000% ( 1) 00:09:28.285 00:09:28.285 00:09:28.285 real 0m1.265s 00:09:28.285 user 0m1.096s 00:09:28.285 sys 0m0.124s 00:09:28.285 02:56:14 nvme.nvme_overhead -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:28.285 02:56:14 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:09:28.285 ************************************ 00:09:28.285 END TEST nvme_overhead 00:09:28.285 ************************************ 00:09:28.285 02:56:14 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:09:28.285 02:56:14 nvme -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:09:28.285 02:56:14 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:28.285 02:56:14 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:28.285 ************************************ 00:09:28.285 START TEST nvme_arbitration 00:09:28.285 ************************************ 00:09:28.285 02:56:14 nvme.nvme_arbitration -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:09:31.574 Initializing NVMe Controllers 00:09:31.574 Attached to 0000:00:10.0 00:09:31.574 Attached to 0000:00:11.0 00:09:31.574 Attached to 0000:00:13.0 00:09:31.574 Attached to 0000:00:12.0 00:09:31.574 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:09:31.574 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:09:31.574 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:09:31.574 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:09:31.574 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:09:31.574 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:09:31.574 /home/vagrant/spdk_repo/spdk/build/examples/arbitration 
run with configuration: 00:09:31.574 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:09:31.574 Initialization complete. Launching workers. 00:09:31.574 Starting thread on core 1 with urgent priority queue 00:09:31.574 Starting thread on core 2 with urgent priority queue 00:09:31.574 Starting thread on core 3 with urgent priority queue 00:09:31.574 Starting thread on core 0 with urgent priority queue 00:09:31.574 QEMU NVMe Ctrl (12340 ) core 0: 5482.67 IO/s 18.24 secs/100000 ios 00:09:31.574 QEMU NVMe Ctrl (12342 ) core 0: 5482.67 IO/s 18.24 secs/100000 ios 00:09:31.574 QEMU NVMe Ctrl (12341 ) core 1: 5376.00 IO/s 18.60 secs/100000 ios 00:09:31.574 QEMU NVMe Ctrl (12342 ) core 1: 5376.00 IO/s 18.60 secs/100000 ios 00:09:31.574 QEMU NVMe Ctrl (12343 ) core 2: 5632.00 IO/s 17.76 secs/100000 ios 00:09:31.574 QEMU NVMe Ctrl (12342 ) core 3: 5376.00 IO/s 18.60 secs/100000 ios 00:09:31.574 ======================================================== 00:09:31.574 00:09:31.574 00:09:31.574 real 0m3.281s 00:09:31.574 user 0m9.021s 00:09:31.574 sys 0m0.152s 00:09:31.574 02:56:17 nvme.nvme_arbitration -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:31.574 02:56:17 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:09:31.574 ************************************ 00:09:31.574 END TEST nvme_arbitration 00:09:31.574 ************************************ 00:09:31.574 02:56:17 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:09:31.574 02:56:17 nvme -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:09:31.574 02:56:17 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:31.574 02:56:17 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:31.574 ************************************ 00:09:31.574 START TEST nvme_single_aen 00:09:31.574 ************************************ 00:09:31.574 02:56:17 nvme.nvme_single_aen -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:09:31.833 Asynchronous Event Request test 00:09:31.833 Attached to 0000:00:10.0 00:09:31.833 Attached to 0000:00:11.0 00:09:31.833 Attached to 0000:00:13.0 00:09:31.833 Attached to 0000:00:12.0 00:09:31.833 Reset controller to setup AER completions for this process 00:09:31.833 Registering asynchronous event callbacks... 
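For context on the threshold dance that follows: each QEMU controller reports an original temperature threshold of 343 Kelvin (70 Celsius) while running at 323 Kelvin (50 Celsius), so the aer tool drops the threshold below the current temperature, which makes the drive raise an Asynchronous Event pointing at log page 2 (SMART / Health Information), and then resets the threshold afterwards. A rough out-of-band equivalent using nvme-cli is sketched below; it is illustrative only and assumes a kernel-owned /dev/nvme0 rather than the SPDK userspace driver these devices are actually bound to.

# Feature ID 0x04 is Temperature Threshold; values are in Kelvin.
nvme get-feature /dev/nvme0 -f 0x04            # expect 343 K (70 C) on these controllers
nvme set-feature /dev/nvme0 -f 0x04 -v 0x142   # 322 K, just below the reported 323 K current temperature
nvme smart-log /dev/nvme0                      # log page 2, where the resulting AER points
nvme set-feature /dev/nvme0 -f 0x04 -v 0x157   # restore the original 343 K threshold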
00:09:31.833 Getting orig temperature thresholds of all controllers 00:09:31.833 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:31.833 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:31.833 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:31.833 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:31.833 Setting all controllers temperature threshold low to trigger AER 00:09:31.833 Waiting for all controllers temperature threshold to be set lower 00:09:31.833 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:31.833 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:09:31.833 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:31.833 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:09:31.833 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:31.833 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:09:31.833 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:31.833 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:09:31.833 Waiting for all controllers to trigger AER and reset threshold 00:09:31.833 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:31.833 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:31.833 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:31.833 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:31.833 Cleaning up... 00:09:31.833 ************************************ 00:09:31.833 END TEST nvme_single_aen 00:09:31.833 ************************************ 00:09:31.833 00:09:31.833 real 0m0.278s 00:09:31.833 user 0m0.115s 00:09:31.833 sys 0m0.118s 00:09:31.833 02:56:17 nvme.nvme_single_aen -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:31.833 02:56:17 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:09:31.833 02:56:17 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:09:31.833 02:56:17 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:31.833 02:56:17 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:31.833 02:56:17 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:31.833 ************************************ 00:09:31.833 START TEST nvme_doorbell_aers 00:09:31.833 ************************************ 00:09:31.833 02:56:17 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1121 -- # nvme_doorbell_aers 00:09:31.833 02:56:17 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:09:31.834 02:56:17 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:09:31.834 02:56:17 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:09:31.834 02:56:17 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:09:31.834 02:56:17 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:31.834 02:56:17 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1509 -- # local bdfs 00:09:31.834 02:56:17 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:31.834 02:56:17 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:09:31.834 02:56:17 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 
00:09:32.093 02:56:17 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1511 -- # (( 4 == 0 )) 00:09:32.093 02:56:17 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:32.093 02:56:17 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:32.093 02:56:17 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:32.352 [2024-05-14 02:56:18.161023] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81683) is not found. Dropping the request. 00:09:42.326 Executing: test_write_invalid_db 00:09:42.326 Waiting for AER completion... 00:09:42.326 Failure: test_write_invalid_db 00:09:42.326 00:09:42.326 Executing: test_invalid_db_write_overflow_sq 00:09:42.326 Waiting for AER completion... 00:09:42.326 Failure: test_invalid_db_write_overflow_sq 00:09:42.326 00:09:42.326 Executing: test_invalid_db_write_overflow_cq 00:09:42.326 Waiting for AER completion... 00:09:42.326 Failure: test_invalid_db_write_overflow_cq 00:09:42.326 00:09:42.326 02:56:27 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:42.326 02:56:27 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:42.326 [2024-05-14 02:56:28.182698] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81683) is not found. Dropping the request. 00:09:52.301 Executing: test_write_invalid_db 00:09:52.301 Waiting for AER completion... 00:09:52.301 Failure: test_write_invalid_db 00:09:52.301 00:09:52.301 Executing: test_invalid_db_write_overflow_sq 00:09:52.301 Waiting for AER completion... 00:09:52.301 Failure: test_invalid_db_write_overflow_sq 00:09:52.301 00:09:52.301 Executing: test_invalid_db_write_overflow_cq 00:09:52.301 Waiting for AER completion... 00:09:52.301 Failure: test_invalid_db_write_overflow_cq 00:09:52.301 00:09:52.301 02:56:37 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:52.301 02:56:37 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:52.301 [2024-05-14 02:56:38.231823] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81683) is not found. Dropping the request. 00:10:02.274 Executing: test_write_invalid_db 00:10:02.274 Waiting for AER completion... 00:10:02.274 Failure: test_write_invalid_db 00:10:02.274 00:10:02.274 Executing: test_invalid_db_write_overflow_sq 00:10:02.274 Waiting for AER completion... 00:10:02.274 Failure: test_invalid_db_write_overflow_sq 00:10:02.274 00:10:02.274 Executing: test_invalid_db_write_overflow_cq 00:10:02.274 Waiting for AER completion... 
00:10:02.274 Failure: test_invalid_db_write_overflow_cq 00:10:02.274 00:10:02.274 02:56:48 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:02.274 02:56:48 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:02.274 [2024-05-14 02:56:48.280906] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81683) is not found. Dropping the request. 00:10:12.246 Executing: test_write_invalid_db 00:10:12.246 Waiting for AER completion... 00:10:12.246 Failure: test_write_invalid_db 00:10:12.246 00:10:12.246 Executing: test_invalid_db_write_overflow_sq 00:10:12.246 Waiting for AER completion... 00:10:12.246 Failure: test_invalid_db_write_overflow_sq 00:10:12.246 00:10:12.246 Executing: test_invalid_db_write_overflow_cq 00:10:12.246 Waiting for AER completion... 00:10:12.246 Failure: test_invalid_db_write_overflow_cq 00:10:12.246 00:10:12.246 00:10:12.246 real 0m40.235s 00:10:12.246 user 0m33.733s 00:10:12.246 sys 0m6.173s 00:10:12.246 02:56:58 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:12.246 02:56:58 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:10:12.246 ************************************ 00:10:12.246 END TEST nvme_doorbell_aers 00:10:12.246 ************************************ 00:10:12.246 02:56:58 nvme -- nvme/nvme.sh@97 -- # uname 00:10:12.246 02:56:58 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:10:12.246 02:56:58 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:10:12.246 02:56:58 nvme -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:10:12.246 02:56:58 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:12.246 02:56:58 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:12.246 ************************************ 00:10:12.246 START TEST nvme_multi_aen 00:10:12.246 ************************************ 00:10:12.246 02:56:58 nvme.nvme_multi_aen -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:10:12.505 [2024-05-14 02:56:58.350841] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81683) is not found. Dropping the request. 00:10:12.505 [2024-05-14 02:56:58.350930] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81683) is not found. Dropping the request. 00:10:12.505 [2024-05-14 02:56:58.350964] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81683) is not found. Dropping the request. 00:10:12.505 [2024-05-14 02:56:58.352463] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81683) is not found. Dropping the request. 00:10:12.505 [2024-05-14 02:56:58.352504] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81683) is not found. Dropping the request. 00:10:12.505 [2024-05-14 02:56:58.352526] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81683) is not found. Dropping the request. 00:10:12.505 [2024-05-14 02:56:58.353638] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81683) is not found. 
Dropping the request. 00:10:12.505 [2024-05-14 02:56:58.353682] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81683) is not found. Dropping the request. 00:10:12.505 [2024-05-14 02:56:58.353704] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81683) is not found. Dropping the request. 00:10:12.505 [2024-05-14 02:56:58.355001] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81683) is not found. Dropping the request. 00:10:12.505 [2024-05-14 02:56:58.355223] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81683) is not found. Dropping the request. 00:10:12.505 [2024-05-14 02:56:58.355538] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81683) is not found. Dropping the request. 00:10:12.505 Child process pid: 82207 00:10:12.763 [Child] Asynchronous Event Request test 00:10:12.763 [Child] Attached to 0000:00:10.0 00:10:12.763 [Child] Attached to 0000:00:11.0 00:10:12.763 [Child] Attached to 0000:00:13.0 00:10:12.763 [Child] Attached to 0000:00:12.0 00:10:12.763 [Child] Registering asynchronous event callbacks... 00:10:12.763 [Child] Getting orig temperature thresholds of all controllers 00:10:12.763 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:12.763 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:12.763 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:12.763 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:12.763 [Child] Waiting for all controllers to trigger AER and reset threshold 00:10:12.763 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:12.763 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:12.763 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:12.763 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:12.763 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:12.763 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:12.763 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:12.763 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:12.763 [Child] Cleaning up... 00:10:12.764 Asynchronous Event Request test 00:10:12.764 Attached to 0000:00:10.0 00:10:12.764 Attached to 0000:00:11.0 00:10:12.764 Attached to 0000:00:13.0 00:10:12.764 Attached to 0000:00:12.0 00:10:12.764 Reset controller to setup AER completions for this process 00:10:12.764 Registering asynchronous event callbacks... 
00:10:12.764 Getting orig temperature thresholds of all controllers 00:10:12.764 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:12.764 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:12.764 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:12.764 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:12.764 Setting all controllers temperature threshold low to trigger AER 00:10:12.764 Waiting for all controllers temperature threshold to be set lower 00:10:12.764 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:12.764 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:10:12.764 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:12.764 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:10:12.764 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:12.764 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:10:12.764 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:12.764 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:10:12.764 Waiting for all controllers to trigger AER and reset threshold 00:10:12.764 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:12.764 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:12.764 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:12.764 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:12.764 Cleaning up... 00:10:12.764 00:10:12.764 real 0m0.537s 00:10:12.764 user 0m0.198s 00:10:12.764 sys 0m0.231s 00:10:12.764 02:56:58 nvme.nvme_multi_aen -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:12.764 02:56:58 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:10:12.764 ************************************ 00:10:12.764 END TEST nvme_multi_aen 00:10:12.764 ************************************ 00:10:12.764 02:56:58 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:12.764 02:56:58 nvme -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:10:12.764 02:56:58 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:12.764 02:56:58 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:12.764 ************************************ 00:10:12.764 START TEST nvme_startup 00:10:12.764 ************************************ 00:10:12.764 02:56:58 nvme.nvme_startup -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:13.023 Initializing NVMe Controllers 00:10:13.023 Attached to 0000:00:10.0 00:10:13.023 Attached to 0000:00:11.0 00:10:13.023 Attached to 0000:00:13.0 00:10:13.023 Attached to 0000:00:12.0 00:10:13.023 Initialization complete. 00:10:13.023 Time used:172793.188 (us). 
00:10:13.023 ************************************ 00:10:13.023 END TEST nvme_startup 00:10:13.023 ************************************ 00:10:13.023 00:10:13.023 real 0m0.246s 00:10:13.023 user 0m0.088s 00:10:13.023 sys 0m0.119s 00:10:13.023 02:56:58 nvme.nvme_startup -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:13.023 02:56:58 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:10:13.023 02:56:59 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:10:13.023 02:56:59 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:13.023 02:56:59 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:13.023 02:56:59 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:13.023 ************************************ 00:10:13.023 START TEST nvme_multi_secondary 00:10:13.023 ************************************ 00:10:13.023 02:56:59 nvme.nvme_multi_secondary -- common/autotest_common.sh@1121 -- # nvme_multi_secondary 00:10:13.023 02:56:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=82263 00:10:13.023 02:56:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:10:13.023 02:56:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=82264 00:10:13.023 02:56:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:13.023 02:56:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:10:17.209 Initializing NVMe Controllers 00:10:17.209 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:17.209 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:17.209 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:17.209 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:17.209 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:10:17.209 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:10:17.209 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:10:17.209 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:10:17.209 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:10:17.209 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:10:17.209 Initialization complete. Launching workers. 
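The nvme_multi_secondary trace above launches three spdk_nvme_perf processes that all pass -i 0, which puts them in the same DPDK shared-memory group so that the later instances attach to the controllers the first one probed; they differ only in core mask and run time. A minimal re-creation of that pairing using the invocations from the log; the background/foreground split is inferred from the pid0/pid1 assignments in the trace, not stated by nvme.sh itself:

PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
$PERF -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &   # backgrounded; its pid is captured as pid0 (82263 in the log)
pid0=$!
$PERF -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &   # backgrounded; pid1 (82264)
pid1=$!
$PERF -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4     # runs in the foreground on core 2
wait "$pid0" "$pid1"                              # each process then prints its own latency table, as below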
00:10:17.209 ======================================================== 00:10:17.209 Latency(us) 00:10:17.209 Device Information : IOPS MiB/s Average min max 00:10:17.209 PCIE (0000:00:10.0) NSID 1 from core 1: 4821.07 18.83 3316.69 1462.01 6236.81 00:10:17.209 PCIE (0000:00:11.0) NSID 1 from core 1: 4821.07 18.83 3318.09 1579.14 6308.64 00:10:17.209 PCIE (0000:00:13.0) NSID 1 from core 1: 4821.07 18.83 3318.39 1601.64 5878.41 00:10:17.209 PCIE (0000:00:12.0) NSID 1 from core 1: 4821.07 18.83 3318.22 1448.06 6250.92 00:10:17.209 PCIE (0000:00:12.0) NSID 2 from core 1: 4821.07 18.83 3318.10 1299.35 6863.92 00:10:17.209 PCIE (0000:00:12.0) NSID 3 from core 1: 4821.07 18.83 3317.95 1584.27 6982.01 00:10:17.209 ======================================================== 00:10:17.209 Total : 28926.42 112.99 3317.91 1299.35 6982.01 00:10:17.209 00:10:17.209 Initializing NVMe Controllers 00:10:17.209 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:17.209 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:17.209 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:17.209 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:17.210 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:10:17.210 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:10:17.210 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:10:17.210 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:10:17.210 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:10:17.210 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:10:17.210 Initialization complete. Launching workers. 00:10:17.210 ======================================================== 00:10:17.210 Latency(us) 00:10:17.210 Device Information : IOPS MiB/s Average min max 00:10:17.210 PCIE (0000:00:10.0) NSID 1 from core 2: 2283.65 8.92 7003.69 1805.68 13258.28 00:10:17.210 PCIE (0000:00:11.0) NSID 1 from core 2: 2283.65 8.92 7005.37 1854.64 12961.01 00:10:17.210 PCIE (0000:00:13.0) NSID 1 from core 2: 2283.65 8.92 7005.10 1817.26 13314.54 00:10:17.210 PCIE (0000:00:12.0) NSID 1 from core 2: 2283.65 8.92 7004.27 1736.72 12869.27 00:10:17.210 PCIE (0000:00:12.0) NSID 2 from core 2: 2283.65 8.92 7004.28 1397.53 13804.46 00:10:17.210 PCIE (0000:00:12.0) NSID 3 from core 2: 2283.65 8.92 7003.92 1019.94 13893.29 00:10:17.210 ======================================================== 00:10:17.210 Total : 13701.88 53.52 7004.44 1019.94 13893.29 00:10:17.210 00:10:17.210 02:57:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 82263 00:10:18.582 Initializing NVMe Controllers 00:10:18.582 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:18.582 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:18.582 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:18.582 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:18.582 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:18.582 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:18.582 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:18.582 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:18.582 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:18.582 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:18.582 Initialization complete. Launching workers. 
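A quick consistency check on the two latency tables above: each perf instance runs at queue depth 16 (-q 16, evidently applied per namespace), so per-namespace throughput should be roughly queue depth divided by average latency. For the core 1 run that gives 16 / 3318 us ≈ 4822 IO/s against the reported 4821.07, and for the core 2 run 16 / 7004 us ≈ 2284 IO/s against 2283.65, so the IOPS and latency columns agree with each other.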
00:10:18.582 ======================================================== 00:10:18.582 Latency(us) 00:10:18.582 Device Information : IOPS MiB/s Average min max 00:10:18.582 PCIE (0000:00:10.0) NSID 1 from core 0: 7599.43 29.69 2103.87 999.31 6346.01 00:10:18.582 PCIE (0000:00:11.0) NSID 1 from core 0: 7599.43 29.69 2104.98 1015.41 6075.56 00:10:18.582 PCIE (0000:00:13.0) NSID 1 from core 0: 7599.43 29.69 2104.97 921.87 5970.69 00:10:18.582 PCIE (0000:00:12.0) NSID 1 from core 0: 7599.03 29.68 2105.06 844.49 6857.10 00:10:18.582 PCIE (0000:00:12.0) NSID 2 from core 0: 7599.43 29.69 2104.92 703.57 6680.97 00:10:18.582 PCIE (0000:00:12.0) NSID 3 from core 0: 7599.43 29.69 2104.88 555.50 6852.51 00:10:18.582 ======================================================== 00:10:18.582 Total : 45596.18 178.11 2104.78 555.50 6857.10 00:10:18.582 00:10:18.582 02:57:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 82264 00:10:18.582 02:57:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=82333 00:10:18.582 02:57:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:10:18.582 02:57:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=82334 00:10:18.582 02:57:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:18.582 02:57:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:10:21.975 Initializing NVMe Controllers 00:10:21.975 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:21.975 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:21.975 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:21.975 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:21.975 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:21.975 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:21.975 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:21.975 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:21.975 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:21.975 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:21.975 Initialization complete. Launching workers. 
00:10:21.975 ======================================================== 00:10:21.975 Latency(us) 00:10:21.975 Device Information : IOPS MiB/s Average min max 00:10:21.975 PCIE (0000:00:10.0) NSID 1 from core 0: 5386.31 21.04 2968.51 1155.96 6909.72 00:10:21.975 PCIE (0000:00:11.0) NSID 1 from core 0: 5386.31 21.04 2969.83 1175.85 6068.12 00:10:21.975 PCIE (0000:00:13.0) NSID 1 from core 0: 5386.31 21.04 2969.76 1160.88 5984.91 00:10:21.975 PCIE (0000:00:12.0) NSID 1 from core 0: 5386.31 21.04 2969.73 1173.23 5922.99 00:10:21.975 PCIE (0000:00:12.0) NSID 2 from core 0: 5386.31 21.04 2969.61 1050.20 6641.89 00:10:21.975 PCIE (0000:00:12.0) NSID 3 from core 0: 5386.31 21.04 2969.43 757.12 6754.05 00:10:21.975 ======================================================== 00:10:21.975 Total : 32317.87 126.24 2969.48 757.12 6909.72 00:10:21.975 00:10:21.975 Initializing NVMe Controllers 00:10:21.975 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:21.975 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:21.975 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:21.975 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:21.975 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:10:21.975 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:10:21.975 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:10:21.975 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:10:21.975 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:10:21.975 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:10:21.975 Initialization complete. Launching workers. 00:10:21.975 ======================================================== 00:10:21.975 Latency(us) 00:10:21.975 Device Information : IOPS MiB/s Average min max 00:10:21.975 PCIE (0000:00:10.0) NSID 1 from core 1: 5406.73 21.12 2957.26 1005.13 6784.78 00:10:21.975 PCIE (0000:00:11.0) NSID 1 from core 1: 5406.73 21.12 2958.55 1030.75 6333.57 00:10:21.975 PCIE (0000:00:13.0) NSID 1 from core 1: 5406.73 21.12 2958.35 888.04 5874.27 00:10:21.975 PCIE (0000:00:12.0) NSID 1 from core 1: 5406.73 21.12 2958.15 815.45 6759.19 00:10:21.975 PCIE (0000:00:12.0) NSID 2 from core 1: 5406.73 21.12 2957.94 691.04 6709.86 00:10:21.975 PCIE (0000:00:12.0) NSID 3 from core 1: 5406.73 21.12 2957.71 575.24 6522.22 00:10:21.975 ======================================================== 00:10:21.975 Total : 32440.38 126.72 2957.99 575.24 6784.78 00:10:21.975 00:10:23.877 Initializing NVMe Controllers 00:10:23.877 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:23.877 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:23.877 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:23.877 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:23.877 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:10:23.877 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:10:23.877 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:10:23.877 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:10:23.877 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:10:23.877 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:10:23.877 Initialization complete. Launching workers. 
00:10:23.877 ======================================================== 00:10:23.878 Latency(us) 00:10:23.878 Device Information : IOPS MiB/s Average min max 00:10:23.878 PCIE (0000:00:10.0) NSID 1 from core 2: 3595.15 14.04 4447.16 1004.61 13174.92 00:10:23.878 PCIE (0000:00:11.0) NSID 1 from core 2: 3595.15 14.04 4450.02 954.13 14012.62 00:10:23.878 PCIE (0000:00:13.0) NSID 1 from core 2: 3595.15 14.04 4449.97 906.44 14052.15 00:10:23.878 PCIE (0000:00:12.0) NSID 1 from core 2: 3595.15 14.04 4447.21 828.41 13382.16 00:10:23.878 PCIE (0000:00:12.0) NSID 2 from core 2: 3595.15 14.04 4445.78 687.82 13146.58 00:10:23.878 PCIE (0000:00:12.0) NSID 3 from core 2: 3595.15 14.04 4446.19 542.24 13460.42 00:10:23.878 ======================================================== 00:10:23.878 Total : 21570.87 84.26 4447.72 542.24 14052.15 00:10:23.878 00:10:23.878 ************************************ 00:10:23.878 END TEST nvme_multi_secondary 00:10:23.878 ************************************ 00:10:23.878 02:57:09 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 82333 00:10:23.878 02:57:09 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 82334 00:10:23.878 00:10:23.878 real 0m10.535s 00:10:23.878 user 0m18.407s 00:10:23.878 sys 0m0.784s 00:10:23.878 02:57:09 nvme.nvme_multi_secondary -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:23.878 02:57:09 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:10:23.878 02:57:09 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:10:23.878 02:57:09 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:10:23.878 02:57:09 nvme -- common/autotest_common.sh@1085 -- # [[ -e /proc/81281 ]] 00:10:23.878 02:57:09 nvme -- common/autotest_common.sh@1086 -- # kill 81281 00:10:23.878 02:57:09 nvme -- common/autotest_common.sh@1087 -- # wait 81281 00:10:23.878 [2024-05-14 02:57:09.591888] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82206) is not found. Dropping the request. 00:10:23.878 [2024-05-14 02:57:09.592026] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82206) is not found. Dropping the request. 00:10:23.878 [2024-05-14 02:57:09.592076] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82206) is not found. Dropping the request. 00:10:23.878 [2024-05-14 02:57:09.592109] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82206) is not found. Dropping the request. 00:10:23.878 [2024-05-14 02:57:09.593026] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82206) is not found. Dropping the request. 00:10:23.878 [2024-05-14 02:57:09.593126] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82206) is not found. Dropping the request. 00:10:23.878 [2024-05-14 02:57:09.593210] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82206) is not found. Dropping the request. 00:10:23.878 [2024-05-14 02:57:09.593251] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82206) is not found. Dropping the request. 00:10:23.878 [2024-05-14 02:57:09.594011] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82206) is not found. Dropping the request. 
00:10:23.878 [2024-05-14 02:57:09.594086] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82206) is not found. Dropping the request. 00:10:23.878 [2024-05-14 02:57:09.594152] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82206) is not found. Dropping the request. 00:10:23.878 [2024-05-14 02:57:09.594189] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82206) is not found. Dropping the request. 00:10:23.878 [2024-05-14 02:57:09.594959] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82206) is not found. Dropping the request. 00:10:23.878 [2024-05-14 02:57:09.595043] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82206) is not found. Dropping the request. 00:10:23.878 [2024-05-14 02:57:09.595107] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82206) is not found. Dropping the request. 00:10:23.878 [2024-05-14 02:57:09.595178] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82206) is not found. Dropping the request. 00:10:23.878 02:57:09 nvme -- common/autotest_common.sh@1089 -- # rm -f /var/run/spdk_stub0 00:10:23.878 02:57:09 nvme -- common/autotest_common.sh@1093 -- # echo 2 00:10:23.878 02:57:09 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:23.878 02:57:09 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:23.878 02:57:09 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:23.878 02:57:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:23.878 ************************************ 00:10:23.878 START TEST bdev_nvme_reset_stuck_adm_cmd 00:10:23.878 ************************************ 00:10:23.878 02:57:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:23.878 * Looking for test storage... 
00:10:23.878 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:23.878 02:57:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:10:23.878 02:57:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:10:23.878 02:57:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:10:23.878 02:57:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:10:23.878 02:57:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:10:23.878 02:57:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:10:23.878 02:57:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1520 -- # bdfs=() 00:10:23.878 02:57:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1520 -- # local bdfs 00:10:23.878 02:57:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:10:23.878 02:57:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:10:23.878 02:57:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:10:23.878 02:57:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:10:23.878 02:57:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:23.878 02:57:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:23.878 02:57:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:10:23.878 02:57:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1511 -- # (( 4 == 0 )) 00:10:23.878 02:57:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:23.878 02:57:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1523 -- # echo 0000:00:10.0 00:10:23.878 02:57:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:10:23.878 02:57:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:10:23.878 02:57:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=82481 00:10:23.878 02:57:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:10:23.878 02:57:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:23.878 02:57:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 82481 00:10:23.878 02:57:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@827 -- # '[' -z 82481 ']' 00:10:23.878 02:57:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:23.878 02:57:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:23.878 02:57:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@834 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:23.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:23.878 02:57:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:23.878 02:57:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:24.138 [2024-05-14 02:57:10.001987] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:10:24.138 [2024-05-14 02:57:10.002185] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82481 ] 00:10:24.396 [2024-05-14 02:57:10.171816] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:24.396 [2024-05-14 02:57:10.194931] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:24.396 [2024-05-14 02:57:10.241810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:24.396 [2024-05-14 02:57:10.241962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:24.396 [2024-05-14 02:57:10.242388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.396 [2024-05-14 02:57:10.242473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:24.964 02:57:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:24.964 02:57:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@860 -- # return 0 00:10:24.964 02:57:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:10:24.964 02:57:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.964 02:57:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:24.964 nvme0n1 00:10:24.964 02:57:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.964 02:57:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:10:24.964 02:57:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_BXGjd.txt 00:10:24.964 02:57:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:10:24.964 02:57:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.964 02:57:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:24.964 true 00:10:24.964 02:57:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.964 02:57:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:10:24.964 02:57:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1715655430 00:10:24.964 02:57:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=82504 00:10:24.964 02:57:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:10:24.964 02:57:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:24.964 02:57:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:10:27.496 02:57:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:10:27.496 02:57:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.496 02:57:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:27.496 [2024-05-14 02:57:12.968173] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:10:27.496 [2024-05-14 02:57:12.968546] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:10:27.496 [2024-05-14 02:57:12.968592] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:27.496 [2024-05-14 02:57:12.968627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:27.496 [2024-05-14 02:57:12.970590] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:10:27.496 02:57:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.496 02:57:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 82504 00:10:27.496 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 82504 00:10:27.496 02:57:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 82504 00:10:27.496 02:57:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:10:27.496 02:57:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:10:27.496 02:57:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:10:27.496 02:57:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.496 02:57:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:27.496 02:57:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.496 02:57:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:10:27.496 02:57:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_BXGjd.txt 00:10:27.497 02:57:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:10:27.497 02:57:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:10:27.497 02:57:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:27.497 02:57:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 
"0x%02x\n"')) 00:10:27.497 02:57:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:27.497 02:57:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:27.497 02:57:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:27.497 02:57:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:27.497 02:57:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:10:27.497 02:57:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:10:27.497 02:57:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:10:27.497 02:57:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:27.497 02:57:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:27.497 02:57:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:27.497 02:57:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:27.497 02:57:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:27.497 02:57:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:27.497 02:57:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:10:27.497 02:57:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:10:27.497 02:57:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_BXGjd.txt 00:10:27.497 02:57:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 82481 00:10:27.497 02:57:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@946 -- # '[' -z 82481 ']' 00:10:27.497 02:57:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # kill -0 82481 00:10:27.497 02:57:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@951 -- # uname 00:10:27.497 02:57:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:27.497 02:57:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 82481 00:10:27.497 02:57:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:27.497 02:57:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:27.497 killing process with pid 82481 00:10:27.497 02:57:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # echo 'killing process with pid 82481' 00:10:27.497 02:57:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@965 -- # kill 82481 00:10:27.497 02:57:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@970 -- # wait 82481 00:10:27.497 02:57:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 
00:10:27.497 02:57:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:10:27.497 ************************************ 00:10:27.497 END TEST bdev_nvme_reset_stuck_adm_cmd 00:10:27.497 ************************************ 00:10:27.497 00:10:27.497 real 0m3.672s 00:10:27.497 user 0m13.013s 00:10:27.497 sys 0m0.540s 00:10:27.497 02:57:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:27.497 02:57:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:27.497 02:57:13 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:10:27.497 02:57:13 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:10:27.497 02:57:13 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:27.497 02:57:13 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:27.497 02:57:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:27.497 ************************************ 00:10:27.497 START TEST nvme_fio 00:10:27.497 ************************************ 00:10:27.497 02:57:13 nvme.nvme_fio -- common/autotest_common.sh@1121 -- # nvme_fio_test 00:10:27.497 02:57:13 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:10:27.497 02:57:13 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:10:27.497 02:57:13 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:10:27.497 02:57:13 nvme.nvme_fio -- common/autotest_common.sh@1509 -- # bdfs=() 00:10:27.497 02:57:13 nvme.nvme_fio -- common/autotest_common.sh@1509 -- # local bdfs 00:10:27.497 02:57:13 nvme.nvme_fio -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:27.497 02:57:13 nvme.nvme_fio -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:27.497 02:57:13 nvme.nvme_fio -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:10:27.757 02:57:13 nvme.nvme_fio -- common/autotest_common.sh@1511 -- # (( 4 == 0 )) 00:10:27.757 02:57:13 nvme.nvme_fio -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:27.757 02:57:13 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:10:27.757 02:57:13 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:10:27.757 02:57:13 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:27.757 02:57:13 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:27.757 02:57:13 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:27.757 02:57:13 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:27.758 02:57:13 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:28.017 02:57:13 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:28.017 02:57:13 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:28.017 02:57:13 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:28.017 02:57:13 nvme.nvme_fio -- 
common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:10:28.017 02:57:13 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:28.017 02:57:13 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # local sanitizers 00:10:28.017 02:57:13 nvme.nvme_fio -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:28.017 02:57:13 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # shift 00:10:28.017 02:57:13 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local asan_lib= 00:10:28.017 02:57:13 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:10:28.017 02:57:13 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:28.017 02:57:13 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # grep libasan 00:10:28.017 02:57:13 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:10:28.017 02:57:14 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:28.017 02:57:14 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:28.017 02:57:14 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # break 00:10:28.017 02:57:14 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:28.017 02:57:14 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:28.276 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:28.276 fio-3.35 00:10:28.276 Starting 1 thread 00:10:31.564 00:10:31.564 test: (groupid=0, jobs=1): err= 0: pid=82627: Tue May 14 02:57:17 2024 00:10:31.564 read: IOPS=16.8k, BW=65.6MiB/s (68.8MB/s)(131MiB/2001msec) 00:10:31.564 slat (nsec): min=4587, max=75573, avg=5797.65, stdev=1708.53 00:10:31.564 clat (usec): min=287, max=9566, avg=3790.11, stdev=456.38 00:10:31.564 lat (usec): min=292, max=9632, avg=3795.91, stdev=456.93 00:10:31.564 clat percentiles (usec): 00:10:31.564 | 1.00th=[ 2868], 5.00th=[ 3228], 10.00th=[ 3392], 20.00th=[ 3523], 00:10:31.564 | 30.00th=[ 3621], 40.00th=[ 3654], 50.00th=[ 3720], 60.00th=[ 3785], 00:10:31.564 | 70.00th=[ 3851], 80.00th=[ 4146], 90.00th=[ 4359], 95.00th=[ 4490], 00:10:31.564 | 99.00th=[ 4948], 99.50th=[ 5669], 99.90th=[ 7701], 99.95th=[ 8291], 00:10:31.564 | 99.99th=[ 9503] 00:10:31.564 bw ( KiB/s): min=64352, max=67928, per=97.83%, avg=65733.33, stdev=1921.74, samples=3 00:10:31.564 iops : min=16088, max=16982, avg=16433.33, stdev=480.43, samples=3 00:10:31.564 write: IOPS=16.8k, BW=65.7MiB/s (68.9MB/s)(132MiB/2001msec); 0 zone resets 00:10:31.564 slat (nsec): min=4581, max=54749, avg=6086.47, stdev=1730.46 00:10:31.564 clat (usec): min=245, max=9497, avg=3799.74, stdev=454.63 00:10:31.564 lat (usec): min=251, max=9509, avg=3805.82, stdev=455.19 00:10:31.564 clat percentiles (usec): 00:10:31.564 | 1.00th=[ 2868], 5.00th=[ 3261], 10.00th=[ 3392], 20.00th=[ 3556], 00:10:31.564 | 30.00th=[ 3621], 40.00th=[ 3687], 50.00th=[ 3720], 60.00th=[ 3785], 00:10:31.564 | 70.00th=[ 3851], 80.00th=[ 4146], 90.00th=[ 4359], 95.00th=[ 4490], 00:10:31.564 | 99.00th=[ 5014], 99.50th=[ 5997], 99.90th=[ 7767], 99.95th=[ 8356], 00:10:31.564 | 99.99th=[ 9372] 00:10:31.564 bw ( KiB/s): min=64200, max=67616, 
per=97.39%, avg=65541.33, stdev=1822.25, samples=3 00:10:31.564 iops : min=16050, max=16904, avg=16385.33, stdev=455.56, samples=3 00:10:31.564 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.01% 00:10:31.564 lat (msec) : 2=0.08%, 4=77.30%, 10=22.57% 00:10:31.564 cpu : usr=99.00%, sys=0.10%, ctx=5, majf=0, minf=625 00:10:31.564 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:31.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.564 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:31.564 issued rwts: total=33614,33666,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.564 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:31.564 00:10:31.564 Run status group 0 (all jobs): 00:10:31.564 READ: bw=65.6MiB/s (68.8MB/s), 65.6MiB/s-65.6MiB/s (68.8MB/s-68.8MB/s), io=131MiB (138MB), run=2001-2001msec 00:10:31.564 WRITE: bw=65.7MiB/s (68.9MB/s), 65.7MiB/s-65.7MiB/s (68.9MB/s-68.9MB/s), io=132MiB (138MB), run=2001-2001msec 00:10:31.564 ----------------------------------------------------- 00:10:31.564 Suppressions used: 00:10:31.564 count bytes template 00:10:31.564 1 32 /usr/src/fio/parse.c 00:10:31.564 1 8 libtcmalloc_minimal.so 00:10:31.564 ----------------------------------------------------- 00:10:31.564 00:10:31.564 02:57:17 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:31.564 02:57:17 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:31.564 02:57:17 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:31.564 02:57:17 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:31.823 02:57:17 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:31.823 02:57:17 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:32.082 02:57:17 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:32.082 02:57:17 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:32.082 02:57:17 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:32.082 02:57:17 nvme.nvme_fio -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:10:32.082 02:57:17 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:32.082 02:57:17 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # local sanitizers 00:10:32.082 02:57:17 nvme.nvme_fio -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:32.082 02:57:17 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # shift 00:10:32.082 02:57:17 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local asan_lib= 00:10:32.082 02:57:17 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:10:32.082 02:57:17 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:32.082 02:57:17 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:10:32.082 02:57:17 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # grep libasan 00:10:32.082 02:57:17 nvme.nvme_fio -- 
common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:32.082 02:57:17 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:32.082 02:57:17 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # break 00:10:32.082 02:57:17 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:32.082 02:57:17 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:32.341 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:32.341 fio-3.35 00:10:32.341 Starting 1 thread 00:10:35.625 00:10:35.625 test: (groupid=0, jobs=1): err= 0: pid=82692: Tue May 14 02:57:21 2024 00:10:35.625 read: IOPS=15.6k, BW=60.9MiB/s (63.8MB/s)(122MiB/2001msec) 00:10:35.625 slat (nsec): min=4581, max=49546, avg=6328.72, stdev=1860.95 00:10:35.625 clat (usec): min=374, max=11225, avg=4087.85, stdev=510.77 00:10:35.625 lat (usec): min=380, max=11275, avg=4094.18, stdev=511.42 00:10:35.625 clat percentiles (usec): 00:10:35.625 | 1.00th=[ 3228], 5.00th=[ 3392], 10.00th=[ 3490], 20.00th=[ 3654], 00:10:35.625 | 30.00th=[ 3982], 40.00th=[ 4080], 50.00th=[ 4146], 60.00th=[ 4178], 00:10:35.625 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4424], 95.00th=[ 4555], 00:10:35.625 | 99.00th=[ 6325], 99.50th=[ 6849], 99.90th=[ 7701], 99.95th=[ 9765], 00:10:35.625 | 99.99th=[10945] 00:10:35.625 bw ( KiB/s): min=61152, max=62992, per=99.58%, avg=62087.67, stdev=920.40, samples=3 00:10:35.625 iops : min=15288, max=15748, avg=15521.67, stdev=230.09, samples=3 00:10:35.625 write: IOPS=15.6k, BW=60.9MiB/s (63.9MB/s)(122MiB/2001msec); 0 zone resets 00:10:35.625 slat (nsec): min=4686, max=96905, avg=6542.28, stdev=2070.18 00:10:35.625 clat (usec): min=322, max=10951, avg=4096.78, stdev=518.65 00:10:35.625 lat (usec): min=328, max=11014, avg=4103.33, stdev=519.35 00:10:35.625 clat percentiles (usec): 00:10:35.625 | 1.00th=[ 3261], 5.00th=[ 3392], 10.00th=[ 3490], 20.00th=[ 3654], 00:10:35.625 | 30.00th=[ 3982], 40.00th=[ 4080], 50.00th=[ 4146], 60.00th=[ 4228], 00:10:35.625 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4424], 95.00th=[ 4555], 00:10:35.625 | 99.00th=[ 6325], 99.50th=[ 6980], 99.90th=[ 7832], 99.95th=[ 9896], 00:10:35.625 | 99.99th=[10683] 00:10:35.625 bw ( KiB/s): min=61480, max=61816, per=98.82%, avg=61637.67, stdev=168.95, samples=3 00:10:35.625 iops : min=15370, max=15454, avg=15409.33, stdev=42.25, samples=3 00:10:35.625 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:10:35.625 lat (msec) : 2=0.05%, 4=30.89%, 10=68.98%, 20=0.04% 00:10:35.625 cpu : usr=99.05%, sys=0.00%, ctx=4, majf=0, minf=625 00:10:35.625 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:35.625 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.625 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:35.625 issued rwts: total=31190,31203,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:35.625 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:35.625 00:10:35.626 Run status group 0 (all jobs): 00:10:35.626 READ: bw=60.9MiB/s (63.8MB/s), 60.9MiB/s-60.9MiB/s (63.8MB/s-63.8MB/s), io=122MiB (128MB), run=2001-2001msec 00:10:35.626 WRITE: bw=60.9MiB/s (63.9MB/s), 60.9MiB/s-60.9MiB/s (63.9MB/s-63.9MB/s), io=122MiB (128MB), run=2001-2001msec 
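Each per-controller fio pass above follows the same recipe: ldd locates the ASan runtime the SPDK fio plugin links against, and both are put in LD_PRELOAD before fio is handed the example job file plus the controller's transport address. Reduced to a stand-alone command sequence (paths exactly as they appear in this run, with 0000:00:11.0 as the example):

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
fio_cfg=/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio
# Preload the sanitizer runtime the plugin was built against, then the plugin itself.
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$fio_cfg" \
    '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096

The dots in traddr=0000.00.11.0 are deliberate: fio reserves ':' in filenames for its own syntax, so the plugin accepts the PCI address with dots in place of colons.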
00:10:35.626 ----------------------------------------------------- 00:10:35.626 Suppressions used: 00:10:35.626 count bytes template 00:10:35.626 1 32 /usr/src/fio/parse.c 00:10:35.626 1 8 libtcmalloc_minimal.so 00:10:35.626 ----------------------------------------------------- 00:10:35.626 00:10:35.626 02:57:21 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:35.626 02:57:21 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:35.626 02:57:21 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:35.626 02:57:21 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:35.884 02:57:21 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:35.884 02:57:21 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:36.143 02:57:22 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:36.143 02:57:22 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:36.143 02:57:22 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:36.143 02:57:22 nvme.nvme_fio -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:10:36.143 02:57:22 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:36.143 02:57:22 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # local sanitizers 00:10:36.143 02:57:22 nvme.nvme_fio -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:36.143 02:57:22 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # shift 00:10:36.143 02:57:22 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local asan_lib= 00:10:36.143 02:57:22 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:10:36.143 02:57:22 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:36.143 02:57:22 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # grep libasan 00:10:36.143 02:57:22 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:10:36.143 02:57:22 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:36.143 02:57:22 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:36.143 02:57:22 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # break 00:10:36.143 02:57:22 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:36.143 02:57:22 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:36.402 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:36.402 fio-3.35 00:10:36.402 Starting 1 thread 00:10:39.684 00:10:39.684 test: (groupid=0, jobs=1): err= 0: pid=82753: Tue May 14 02:57:25 2024 00:10:39.684 read: IOPS=17.4k, BW=68.0MiB/s (71.3MB/s)(136MiB/2001msec) 00:10:39.684 slat (nsec): min=4569, max=58647, avg=5768.45, 
stdev=1537.40 00:10:39.684 clat (usec): min=281, max=10469, avg=3657.30, stdev=459.44 00:10:39.684 lat (usec): min=287, max=10528, avg=3663.07, stdev=459.99 00:10:39.684 clat percentiles (usec): 00:10:39.684 | 1.00th=[ 2671], 5.00th=[ 3261], 10.00th=[ 3326], 20.00th=[ 3392], 00:10:39.684 | 30.00th=[ 3458], 40.00th=[ 3490], 50.00th=[ 3556], 60.00th=[ 3589], 00:10:39.684 | 70.00th=[ 3654], 80.00th=[ 3818], 90.00th=[ 4293], 95.00th=[ 4424], 00:10:39.684 | 99.00th=[ 4686], 99.50th=[ 5604], 99.90th=[ 6980], 99.95th=[ 8979], 00:10:39.684 | 99.99th=[10421] 00:10:39.684 bw ( KiB/s): min=68128, max=68936, per=98.61%, avg=68656.00, stdev=457.54, samples=3 00:10:39.684 iops : min=17032, max=17234, avg=17164.00, stdev=114.39, samples=3 00:10:39.684 write: IOPS=17.4k, BW=68.1MiB/s (71.4MB/s)(136MiB/2001msec); 0 zone resets 00:10:39.684 slat (nsec): min=4687, max=61623, avg=5936.93, stdev=1516.75 00:10:39.684 clat (usec): min=222, max=10378, avg=3668.58, stdev=465.79 00:10:39.684 lat (usec): min=227, max=10390, avg=3674.51, stdev=466.30 00:10:39.684 clat percentiles (usec): 00:10:39.684 | 1.00th=[ 2671], 5.00th=[ 3261], 10.00th=[ 3359], 20.00th=[ 3425], 00:10:39.684 | 30.00th=[ 3458], 40.00th=[ 3523], 50.00th=[ 3556], 60.00th=[ 3589], 00:10:39.684 | 70.00th=[ 3654], 80.00th=[ 3851], 90.00th=[ 4359], 95.00th=[ 4490], 00:10:39.684 | 99.00th=[ 4686], 99.50th=[ 5604], 99.90th=[ 7373], 99.95th=[ 9110], 00:10:39.684 | 99.99th=[10159] 00:10:39.684 bw ( KiB/s): min=68200, max=68944, per=98.25%, avg=68498.67, stdev=393.09, samples=3 00:10:39.684 iops : min=17050, max=17236, avg=17124.67, stdev=98.27, samples=3 00:10:39.684 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:10:39.684 lat (msec) : 2=0.29%, 4=80.96%, 10=18.69%, 20=0.02% 00:10:39.684 cpu : usr=99.00%, sys=0.15%, ctx=4, majf=0, minf=627 00:10:39.684 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:39.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.684 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:39.684 issued rwts: total=34828,34875,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:39.685 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:39.685 00:10:39.685 Run status group 0 (all jobs): 00:10:39.685 READ: bw=68.0MiB/s (71.3MB/s), 68.0MiB/s-68.0MiB/s (71.3MB/s-71.3MB/s), io=136MiB (143MB), run=2001-2001msec 00:10:39.685 WRITE: bw=68.1MiB/s (71.4MB/s), 68.1MiB/s-68.1MiB/s (71.4MB/s-71.4MB/s), io=136MiB (143MB), run=2001-2001msec 00:10:39.942 ----------------------------------------------------- 00:10:39.943 Suppressions used: 00:10:39.943 count bytes template 00:10:39.943 1 32 /usr/src/fio/parse.c 00:10:39.943 1 8 libtcmalloc_minimal.so 00:10:39.943 ----------------------------------------------------- 00:10:39.943 00:10:39.943 02:57:25 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:39.943 02:57:25 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:39.943 02:57:25 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:39.943 02:57:25 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:40.201 02:57:26 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:40.201 02:57:26 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:40.459 02:57:26 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:40.459 
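As with the earlier controllers, the 0000:00:13.0 pass is gated by two spdk_nvme_identify probes: one checks that the controller reports at least one namespace, the other whether a namespace advertises extended (metadata-carrying) LBAs, which would change the block size handed to fio; none of the controllers in this run do, so bs stays at 4096. A condensed sketch of that preflight (the extended-LBA branch is not taken here, so its block size is left as a comment):

identify=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify
bdf=0000:00:13.0
# Skip controllers that expose no namespaces at all.
"$identify" -r "trtype:PCIe traddr:$bdf" | grep -qE '^Namespace ID:[0-9]+' || exit 0
if "$identify" -r "trtype:PCIe traddr:$bdf" | grep -q 'Extended Data LBA'; then
    : # not taken in this run; a block size including the per-block metadata would be chosen
else
    bs=4096
fi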
02:57:26 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:40.459 02:57:26 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:40.459 02:57:26 nvme.nvme_fio -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:10:40.459 02:57:26 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:40.459 02:57:26 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # local sanitizers 00:10:40.459 02:57:26 nvme.nvme_fio -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:40.459 02:57:26 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # shift 00:10:40.459 02:57:26 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local asan_lib= 00:10:40.459 02:57:26 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:10:40.459 02:57:26 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:40.459 02:57:26 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # grep libasan 00:10:40.459 02:57:26 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:10:40.459 02:57:26 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:40.459 02:57:26 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:40.459 02:57:26 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # break 00:10:40.459 02:57:26 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:40.459 02:57:26 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:40.716 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:40.716 fio-3.35 00:10:40.716 Starting 1 thread 00:10:43.997 00:10:43.997 test: (groupid=0, jobs=1): err= 0: pid=82814: Tue May 14 02:57:29 2024 00:10:43.997 read: IOPS=16.4k, BW=64.3MiB/s (67.4MB/s)(129MiB/2001msec) 00:10:43.997 slat (nsec): min=4204, max=75253, avg=5942.41, stdev=2353.08 00:10:43.997 clat (usec): min=342, max=11185, avg=3865.46, stdev=691.40 00:10:43.997 lat (usec): min=348, max=11250, avg=3871.40, stdev=692.29 00:10:43.997 clat percentiles (usec): 00:10:43.997 | 1.00th=[ 2966], 5.00th=[ 3228], 10.00th=[ 3326], 20.00th=[ 3425], 00:10:43.997 | 30.00th=[ 3523], 40.00th=[ 3589], 50.00th=[ 3654], 60.00th=[ 3785], 00:10:43.997 | 70.00th=[ 3982], 80.00th=[ 4293], 90.00th=[ 4621], 95.00th=[ 5014], 00:10:43.997 | 99.00th=[ 6980], 99.50th=[ 7635], 99.90th=[ 7963], 99.95th=[ 9896], 00:10:43.997 | 99.99th=[11076] 00:10:43.997 bw ( KiB/s): min=64888, max=74976, per=100.00%, avg=69042.67, stdev=5273.96, samples=3 00:10:43.997 iops : min=16222, max=18744, avg=17260.67, stdev=1318.49, samples=3 00:10:43.997 write: IOPS=16.5k, BW=64.4MiB/s (67.5MB/s)(129MiB/2001msec); 0 zone resets 00:10:43.997 slat (nsec): min=4252, max=78819, avg=6061.90, stdev=2458.06 00:10:43.997 clat (usec): min=314, max=11116, avg=3882.03, stdev=713.25 00:10:43.997 lat (usec): min=320, max=11127, 
avg=3888.09, stdev=714.15 00:10:43.997 clat percentiles (usec): 00:10:43.997 | 1.00th=[ 2966], 5.00th=[ 3228], 10.00th=[ 3326], 20.00th=[ 3425], 00:10:43.997 | 30.00th=[ 3523], 40.00th=[ 3589], 50.00th=[ 3654], 60.00th=[ 3785], 00:10:43.997 | 70.00th=[ 4015], 80.00th=[ 4293], 90.00th=[ 4621], 95.00th=[ 5080], 00:10:43.997 | 99.00th=[ 7308], 99.50th=[ 7701], 99.90th=[ 8291], 99.95th=[10028], 00:10:43.997 | 99.99th=[10945] 00:10:43.997 bw ( KiB/s): min=65288, max=74560, per=100.00%, avg=68936.00, stdev=4941.75, samples=3 00:10:43.997 iops : min=16322, max=18640, avg=17234.00, stdev=1235.44, samples=3 00:10:43.997 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:10:43.997 lat (msec) : 2=0.06%, 4=70.24%, 10=29.62%, 20=0.05% 00:10:43.997 cpu : usr=99.05%, sys=0.00%, ctx=3, majf=0, minf=623 00:10:43.997 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:43.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:43.997 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:43.997 issued rwts: total=32916,32997,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:43.997 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:43.997 00:10:43.997 Run status group 0 (all jobs): 00:10:43.997 READ: bw=64.3MiB/s (67.4MB/s), 64.3MiB/s-64.3MiB/s (67.4MB/s-67.4MB/s), io=129MiB (135MB), run=2001-2001msec 00:10:43.997 WRITE: bw=64.4MiB/s (67.5MB/s), 64.4MiB/s-64.4MiB/s (67.5MB/s-67.5MB/s), io=129MiB (135MB), run=2001-2001msec 00:10:43.997 ----------------------------------------------------- 00:10:43.997 Suppressions used: 00:10:43.997 count bytes template 00:10:43.997 1 32 /usr/src/fio/parse.c 00:10:43.997 1 8 libtcmalloc_minimal.so 00:10:43.997 ----------------------------------------------------- 00:10:43.997 00:10:43.997 02:57:29 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:43.997 02:57:29 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:10:43.997 00:10:43.997 real 0m16.465s 00:10:43.997 user 0m13.411s 00:10:43.997 sys 0m1.480s 00:10:43.997 02:57:29 nvme.nvme_fio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:43.997 ************************************ 00:10:43.997 END TEST nvme_fio 00:10:43.997 ************************************ 00:10:43.997 02:57:29 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:10:43.997 ************************************ 00:10:43.997 END TEST nvme 00:10:43.997 ************************************ 00:10:43.997 00:10:43.997 real 1m25.549s 00:10:43.997 user 3m31.806s 00:10:43.997 sys 0m13.305s 00:10:43.997 02:57:29 nvme -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:43.997 02:57:29 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:43.997 02:57:30 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:10:43.997 02:57:30 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:43.997 02:57:30 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:43.997 02:57:30 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:43.997 02:57:30 -- common/autotest_common.sh@10 -- # set +x 00:10:43.997 ************************************ 00:10:43.997 START TEST nvme_scc 00:10:43.997 ************************************ 00:10:43.997 02:57:30 nvme_scc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:44.255 * Looking for test storage... 
00:10:44.255 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:44.255 02:57:30 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:44.255 02:57:30 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:44.255 02:57:30 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:44.255 02:57:30 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:44.255 02:57:30 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:44.255 02:57:30 nvme_scc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:44.255 02:57:30 nvme_scc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:44.255 02:57:30 nvme_scc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:44.255 02:57:30 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.255 02:57:30 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.255 02:57:30 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.255 02:57:30 nvme_scc -- paths/export.sh@5 -- # export PATH 00:10:44.255 02:57:30 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.255 02:57:30 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:10:44.255 02:57:30 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:44.255 02:57:30 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:10:44.255 02:57:30 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:44.255 02:57:30 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:10:44.256 02:57:30 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:44.256 02:57:30 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:44.256 02:57:30 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:44.256 02:57:30 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:10:44.256 02:57:30 nvme_scc -- 
cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:44.256 02:57:30 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:10:44.256 02:57:30 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:10:44.256 02:57:30 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:10:44.256 02:57:30 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:44.513 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:44.771 Waiting for block devices as requested 00:10:44.771 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:44.771 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:45.029 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:45.029 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:50.303 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:50.303 02:57:36 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:10:50.303 02:57:36 nvme_scc -- scripts/common.sh@15 -- # local i 00:10:50.303 02:57:36 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:10:50.303 02:57:36 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:50.303 02:57:36 nvme_scc -- scripts/common.sh@24 -- # return 0 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:10:50.303 
02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r 
reg val 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:50.303 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:50.304 02:57:36 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
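The wall of xtrace around this point is scan_nvme_ctrls walking each /sys/class/nvme/nvme* controller, confirming its PCI address is usable, and caching every field of nvme id-ctrl output in a bash associative array named after the controller (nvme0[vid], nvme0[sn], nvme0[oncs], ...), so later steps can test capabilities without re-querying the device. Stripped of the tracing, the per-controller parse amounts to something like this (a simplification of the traced nvme_get helper, not its exact text):

declare -A nvme0
while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}     # field name, e.g. vid, sn, mdts, oncs
    [[ -n $reg && -n $val ]] || continue
    nvme0[$reg]=${val# }         # the real helper evals the assignment to preserve quoting
done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
echo "vid=${nvme0[vid]} mdts=${nvme0[mdts]} oncs=${nvme0[oncs]}"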
00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:50.304 02:57:36 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:50.304 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 
00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 
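What the trace above repeats, register after register, is the nvme/functions.sh helper splitting each "field : value" line of `nvme id-ctrl /dev/nvme0` on ':' and eval'ing it into a per-controller associative array (nvme0, then nvme0n1 for the namespace, nvme1 further down). A minimal sketch of that parsing pattern, assuming nvme-cli's default "field : value" text output; parse_id_ctrl is a hypothetical helper name for illustration, not the actual function in functions.sh:

    parse_id_ctrl() {
        # caller passes the name of an associative array and a device node
        local -n _ctrl=$1
        local dev=$2 reg val
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}   # field names are padded with spaces
            val=${val# }               # drop the single space after the colon
            [[ -n $reg && -n $val ]] && _ctrl[$reg]=$val
        done < <(nvme id-ctrl "$dev")
    }

    declare -A nvme0=()
    parse_id_ctrl nvme0 /dev/nvme0
    echo "${nvme0[sqes]} ${nvme0[subnqn]}"   # e.g. 0x66 nqn.2019-08.org.qemu:12341 per the trace above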
00:10:50.305 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@21 
-- # IFS=: 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 
00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:50.306 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:50.307 02:57:36 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:10:50.307 02:57:36 nvme_scc -- scripts/common.sh@15 -- # local i 00:10:50.307 02:57:36 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:10:50.307 02:57:36 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:50.307 02:57:36 nvme_scc -- scripts/common.sh@24 -- # return 0 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.307 02:57:36 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.307 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[ver]="0x10400"' 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:50.308 02:57:36 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.308 
02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.308 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:50.309 02:57:36 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.309 02:57:36 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:50.309 02:57:36 nvme_scc 
-- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r 
reg val 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.309 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:50.310 02:57:36 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1n1[nsfeat]="0x14"' 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:10:50.310 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.311 
02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 
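The trace above (nvme/functions.sh@16-23) shows the parsing pattern used throughout this section: the output of nvme id-ctrl / nvme id-ns is read one line at a time, IFS=: lets read -r reg val split each line into a field name and its value, and an eval stores the value in a global associative array named after the device (nvme1, nvme1n1, ...). Below is a minimal sketch of that pattern, reconstructed from the trace rather than copied from the upstream functions.sh, assuming bash 4+:

    # Sketch reconstructed from the trace; the real nvme/functions.sh may differ in detail.
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                      # declares the global array, e.g. nvme1n1=()
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue            # skip headers and blank lines
            reg=${reg//[[:space:]]/}             # field name, e.g. nsze
            eval "${ref}[${reg}]=\"${val# }\""   # e.g. nvme1n1[nsze]="0x17a17a"
        done < <("$@")                           # remaining args are the nvme-cli command
    }
    # Usage mirroring this run:
    #   nvme_get nvme1n1 /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
    #   echo "${nvme1n1[nsze]} ${nvme1n1[flbas]}"    # -> 0x17a17a 0x7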
00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.311 02:57:36 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:50.311 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:10:50.312 02:57:36 nvme_scc -- scripts/common.sh@15 -- # local i 00:10:50.312 02:57:36 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:10:50.312 02:57:36 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:50.312 02:57:36 nvme_scc -- scripts/common.sh@24 -- # return 0 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:50.312 02:57:36 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg 
val 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.312 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.313 02:57:36 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 
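Around functions.sh@47-63 the trace also shows the discovery loop that drives these dumps: every controller under /sys/class/nvme is filtered through pci_can_use (scripts/common.sh), identified with id-ctrl, each of its namespaces is identified with id-ns, and the results are recorded in the ctrls/nvmes/bdfs/ordered_ctrls maps. A condensed reconstruction of that loop follows; it reuses the nvme_get sketch above, and the way the PCI address is derived from sysfs is an assumption, since the log does not show that step.

    # Condensed reconstruction of the discovery loop traced here (functions.sh@47-63).
    # Depends on nvme_get (sketched earlier) and pci_can_use from scripts/common.sh.
    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(basename "$(readlink -f "$ctrl/device")")    # e.g. 0000:00:12.0 (assumed derivation)
        pci_can_use "$pci" || continue                     # honor the PCI allow/block lists
        ctrl_dev=${ctrl##*/}                               # e.g. nvme2
        nvme_get "$ctrl_dev" /usr/local/src/nvme-cli/nvme id-ctrl "/dev/$ctrl_dev"
        declare -gA "${ctrl_dev}_ns=()"                    # per-controller namespace map, e.g. nvme2_ns
        for ns in "$ctrl/${ctrl##*/}n"*; do                # e.g. /sys/class/nvme/nvme2/nvme2n1
            [[ -e $ns ]] || continue
            ns_dev=${ns##*/}
            nvme_get "$ns_dev" /usr/local/src/nvme-cli/nvme id-ns "/dev/$ns_dev"
            eval "${ctrl_dev}_ns[${ns_dev##*n}]=$ns_dev"   # e.g. nvme2_ns[1]=nvme2n1
        done
        ctrls[$ctrl_dev]=$ctrl_dev
        nvmes[$ctrl_dev]=${ctrl_dev}_ns
        bdfs[$ctrl_dev]=$pci
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev         # e.g. ordered_ctrls[2]=nvme2
    done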
00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.313 02:57:36 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 
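Once these fields are cached, later steps can test feature bits straight from the arrays instead of re-running nvme-cli; for the simple-copy checks this nvme_scc suite appears to exercise, the relevant field is ONCS, where bit 8 advertises Copy command support, and the QEMU controllers identified in this run report oncs=0x15d, which has that bit set. The helper below is purely illustrative and is not a function shown anywhere in this log:

    # Hypothetical helper (not from functions.sh). The oncs values come from the
    # id-ctrl dumps parsed above; ONCS bit 8 = Copy command supported.
    ctrl_supports_copy() {
        local -n _c=$1               # nameref to a controller array, e.g. nvme1 or nvme2
        local oncs=${_c[oncs]:-0}
        (( oncs & (1 << 8) ))        # exit 0 only when the Copy bit is set
    }
    # ctrl_supports_copy nvme2 && echo "nvme2 advertises simple copy"   # 0x15d & 0x100 != 0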
00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:50.313 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.314 02:57:36 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:10:50.314 02:57:36 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.314 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:50.577 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:50.577 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.578 
02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2[ofcs]="0"' 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:10:50.578 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.579 02:57:36 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:10:50.579 02:57:36 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.579 02:57:36 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme2n2[nsze]="0x100000"' 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.580 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 
00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.581 02:57:36 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:10:50.581 02:57:36 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:10:50.581 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@20 
-- # local -gA 'nvme2n3=()' 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.582 02:57:36 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.582 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2n3[nabo]="0"' 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.583 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
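[editor note] The lines that follow register the last namespace (_ctrl_ns[3]=nvme2n3), record the controller in the ctrls/nvmes/bdfs/ordered_ctrls tables for nvme2 (PCI 0000:00:12.0), and move on to nvme3. A rough sketch of that sysfs-enumeration pattern is below; the function name scan_ctrl and the readlink/basename BDF lookup are assumptions for illustration, not the actual functions.sh code.

#!/usr/bin/env bash
# Sketch of the controller/namespace bookkeeping visible in the trace.
declare -A ctrls=() nvmes=() bdfs=()
declare -a ordered_ctrls=()

scan_ctrl() {                               # scan_ctrl /sys/class/nvme/nvmeX
  local ctrl=$1 ctrl_dev=${1##*/} ns
  declare -gA "${ctrl_dev}_ns=()"           # per-controller namespace table, e.g. nvme2_ns
  local -n _ctrl_ns="${ctrl_dev}_ns"
  for ns in "$ctrl/${ctrl##*/}n"*; do       # /sys/class/nvme/nvme2/nvme2n1, n2, n3 ...
    [[ -e $ns ]] || continue
    _ctrl_ns[${ns##*n}]=${ns##*/}           # e.g. nvme2_ns[1]=nvme2n1
  done
  ctrls["$ctrl_dev"]=$ctrl_dev
  nvmes["$ctrl_dev"]="${ctrl_dev}_ns"
  bdfs["$ctrl_dev"]=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:12.0 (assumed lookup)
  ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
}

for ctrl in /sys/class/nvme/nvme*; do
  [[ -e $ctrl ]] && scan_ctrl "$ctrl"
done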
00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:50.584 02:57:36 nvme_scc -- scripts/common.sh@15 -- # local i 00:10:50.584 02:57:36 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:10:50.584 02:57:36 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:50.584 02:57:36 nvme_scc -- scripts/common.sh@24 -- # return 0 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.584 02:57:36 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:50.584 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:50.585 02:57:36 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:50.585 02:57:36 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.585 02:57:36 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:50.585 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:50.586 02:57:36 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.586 02:57:36 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:50.586 02:57:36 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:50.586 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:50.587 
02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 
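[Editor's note] The xtrace above is nvme/functions.sh populating the nvme3 associative array: it runs the bundled nvme-cli (`/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3`), splits each output line with `IFS=:` / `read -r reg val`, and evals every pair into `nvme3[reg]=val` (vid, ssvid, sn, oncs, and so on). Below is a minimal standalone sketch of that parsing pattern, not the actual functions.sh code: the array name ctrl_regs and the plain `nvme` invocation are illustrative assumptions.

#!/usr/bin/env bash
# Sketch only: approximates the id-ctrl parsing loop visible in the trace above.
declare -A ctrl_regs
while IFS=: read -r reg val; do
    [[ -n $reg && -n $val ]] || continue   # skip the header and blank lines
    reg=${reg//[[:space:]]/}               # "ps    0 " becomes "ps0", as in the trace
    ctrl_regs[$reg]=${val# }               # keep the value, trimming one leading space
done < <(nvme id-ctrl /dev/nvme3)
echo "vid=${ctrl_regs[vid]} oncs=${ctrl_regs[oncs]} mdts=${ctrl_regs[mdts]}"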
00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:50.587 02:57:36 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:50.587 02:57:36 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:10:50.588 02:57:36 nvme_scc -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:10:50.588 02:57:36 nvme_scc -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:50.588 02:57:36 nvme_scc -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:10:50.588 02:57:36 nvme_scc -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:10:50.588 02:57:36 nvme_scc -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:10:50.588 02:57:36 nvme_scc -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:10:50.588 02:57:36 nvme_scc -- nvme/functions.sh@194 -- # [[ function == function ]] 00:10:50.588 02:57:36 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:50.588 02:57:36 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme1 00:10:50.588 02:57:36 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme1 oncs 00:10:50.588 02:57:36 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme1 00:10:50.588 02:57:36 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme1 00:10:50.588 02:57:36 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme1 oncs 00:10:50.588 02:57:36 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:10:50.588 02:57:36 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:50.588 02:57:36 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:50.588 02:57:36 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:50.588 02:57:36 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:50.588 02:57:36 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:10:50.588 02:57:36 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:10:50.588 02:57:36 nvme_scc -- nvme/functions.sh@197 -- # echo nvme1 00:10:50.588 02:57:36 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:50.588 02:57:36 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:10:50.588 02:57:36 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:10:50.588 02:57:36 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme0 00:10:50.588 02:57:36 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:10:50.588 02:57:36 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:10:50.588 02:57:36 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:10:50.588 02:57:36 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:50.588 02:57:36 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:50.588 02:57:36 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:50.588 02:57:36 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:50.588 02:57:36 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:10:50.588 02:57:36 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:10:50.588 02:57:36 nvme_scc -- nvme/functions.sh@197 -- # echo nvme0 00:10:50.588 02:57:36 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:50.588 02:57:36 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme3 00:10:50.588 02:57:36 nvme_scc -- 
nvme/functions.sh@182 -- # local ctrl=nvme3 oncs 00:10:50.588 02:57:36 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme3 00:10:50.588 02:57:36 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme3 00:10:50.588 02:57:36 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme3 oncs 00:10:50.588 02:57:36 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:10:50.588 02:57:36 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:50.588 02:57:36 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:50.588 02:57:36 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:50.588 02:57:36 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:50.588 02:57:36 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:10:50.588 02:57:36 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:10:50.588 02:57:36 nvme_scc -- nvme/functions.sh@197 -- # echo nvme3 00:10:50.588 02:57:36 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:50.588 02:57:36 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme2 00:10:50.588 02:57:36 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme2 oncs 00:10:50.588 02:57:36 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme2 00:10:50.588 02:57:36 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme2 00:10:50.588 02:57:36 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme2 oncs 00:10:50.588 02:57:36 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:10:50.588 02:57:36 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:50.588 02:57:36 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:50.588 02:57:36 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:50.588 02:57:36 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:50.588 02:57:36 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:10:50.588 02:57:36 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:10:50.588 02:57:36 nvme_scc -- nvme/functions.sh@197 -- # echo nvme2 00:10:50.588 02:57:36 nvme_scc -- nvme/functions.sh@205 -- # (( 4 > 0 )) 00:10:50.588 02:57:36 nvme_scc -- nvme/functions.sh@206 -- # echo nvme1 00:10:50.588 02:57:36 nvme_scc -- nvme/functions.sh@207 -- # return 0 00:10:50.588 02:57:36 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:10:50.588 02:57:36 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:10:50.588 02:57:36 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:51.156 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:51.723 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:51.723 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:51.723 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:51.723 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:51.723 02:57:37 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:10:51.723 02:57:37 nvme_scc -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:10:51.723 02:57:37 nvme_scc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:51.723 02:57:37 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:10:51.723 ************************************ 00:10:51.723 START TEST nvme_simple_copy 00:10:51.723 ************************************ 00:10:51.723 02:57:37 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1121 -- # 
/home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:10:52.128 Initializing NVMe Controllers 00:10:52.128 Attaching to 0000:00:10.0 00:10:52.128 Controller supports SCC. Attached to 0000:00:10.0 00:10:52.128 Namespace ID: 1 size: 6GB 00:10:52.128 Initialization complete. 00:10:52.128 00:10:52.128 Controller QEMU NVMe Ctrl (12340 ) 00:10:52.128 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:10:52.128 Namespace Block Size:4096 00:10:52.128 Writing LBAs 0 to 63 with Random Data 00:10:52.128 Copied LBAs from 0 - 63 to the Destination LBA 256 00:10:52.128 LBAs matching Written Data: 64 00:10:52.128 00:10:52.128 real 0m0.287s 00:10:52.128 user 0m0.110s 00:10:52.128 sys 0m0.076s 00:10:52.128 02:57:38 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:52.128 02:57:38 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:10:52.128 ************************************ 00:10:52.128 END TEST nvme_simple_copy 00:10:52.128 ************************************ 00:10:52.128 00:10:52.128 real 0m8.055s 00:10:52.128 user 0m1.321s 00:10:52.128 sys 0m1.616s 00:10:52.128 02:57:38 nvme_scc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:52.128 ************************************ 00:10:52.128 END TEST nvme_scc 00:10:52.128 ************************************ 00:10:52.128 02:57:38 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:10:52.128 02:57:38 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:10:52.128 02:57:38 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:10:52.128 02:57:38 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:10:52.128 02:57:38 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:10:52.128 02:57:38 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:10:52.128 02:57:38 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:52.128 02:57:38 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:52.128 02:57:38 -- common/autotest_common.sh@10 -- # set +x 00:10:52.128 ************************************ 00:10:52.128 START TEST nvme_fdp 00:10:52.128 ************************************ 00:10:52.128 02:57:38 nvme_fdp -- common/autotest_common.sh@1121 -- # test/nvme/nvme_fdp.sh 00:10:52.389 * Looking for test storage... 
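[Editor's note] Before the simple-copy run above, get_ctrls_with_feature walked every scanned controller and kept only those whose ONCS value has bit 8 set, the Copy (simple copy) command capability; all four QEMU controllers report oncs=0x15d, and nvme1 (BDF 0000:00:10.0) was echoed first, so its address is what gets passed to the simple_copy test binary. A hedged sketch of that bit test follows; has_simple_copy is an illustrative name, not an SPDK helper.

# Sketch of the ONCS check the trace performs: (( oncs & 1 << 8 )).
has_simple_copy() {
    local oncs=$1
    (( oncs & (1 << 8) ))                  # bit 8 = Copy command supported
}
oncs=0x15d                                  # value every controller reported above
has_simple_copy "$oncs" && echo "ONCS=$oncs: simple copy supported"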
00:10:52.389 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:52.389 02:57:38 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:52.389 02:57:38 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:52.389 02:57:38 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:52.389 02:57:38 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:52.389 02:57:38 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:52.389 02:57:38 nvme_fdp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:52.389 02:57:38 nvme_fdp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:52.389 02:57:38 nvme_fdp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:52.389 02:57:38 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.389 02:57:38 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.389 02:57:38 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.389 02:57:38 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:10:52.389 02:57:38 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.389 02:57:38 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:10:52.389 02:57:38 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:52.389 02:57:38 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:10:52.389 02:57:38 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:52.389 02:57:38 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:10:52.389 02:57:38 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:52.389 02:57:38 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:52.389 02:57:38 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:52.389 02:57:38 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:10:52.389 02:57:38 nvme_fdp -- 
cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:52.389 02:57:38 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:52.648 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:52.907 Waiting for block devices as requested 00:10:52.907 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:52.907 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:52.907 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:53.166 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:58.449 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:58.449 02:57:44 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:10:58.449 02:57:44 nvme_fdp -- scripts/common.sh@15 -- # local i 00:10:58.449 02:57:44 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:10:58.449 02:57:44 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:58.449 02:57:44 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.449 
02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme0[rtd3e]="0"' 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.449 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:58.450 02:57:44 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.450 02:57:44 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:58.450 02:57:44 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.450 02:57:44 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:58.450 02:57:44 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.450 02:57:44 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n - ]] 00:10:58.450 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:58.451 
02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:10:58.451 
02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.451 02:57:44 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:10:58.451 02:57:44 
nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 
00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:10:58.451 02:57:44 nvme_fdp -- scripts/common.sh@15 -- # local i 00:10:58.451 02:57:44 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:10:58.451 02:57:44 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:58.451 02:57:44 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:58.451 02:57:44 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.451 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 
02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:58.452 02:57:44 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:58.452 02:57:44 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 
00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.452 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.453 02:57:44 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 
00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.453 
02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 
00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:10:58.453 02:57:44 nvme_fdp -- scripts/common.sh@15 -- # local i 00:10:58.453 02:57:44 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:10:58.453 02:57:44 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:58.453 02:57:44 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.453 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.454 02:57:44 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[cntlid]="0"' 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:58.454 02:57:44 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.454 02:57:44 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2[hmmin]=0 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.454 02:57:44 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:58.454 02:57:44 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:58.454 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:58.455 02:57:44 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.455 
02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n1[nuse]="0x100000"' 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:10:58.455 02:57:44 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n1[anagrpid]=0 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:58.455 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 
lbads:9 rp:0 ]] 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:10:58.456 02:57:44 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 
00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.456 02:57:44 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 
' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:58.456 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.457 02:57:44 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.457 02:57:44 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:10:58.457 02:57:44 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@60 -- # 
ctrls["$ctrl_dev"]=nvme2 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:58.457 02:57:44 nvme_fdp -- scripts/common.sh@15 -- # local i 00:10:58.457 02:57:44 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:10:58.457 02:57:44 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:58.457 02:57:44 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:58.457 02:57:44 
nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:58.457 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.457 02:57:44 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.458 02:57:44 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 
00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.458 02:57:44 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 
00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.458 02:57:44 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 
-- # nvme3[icsvscc]=0 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:58.458 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.459 
02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@61 -- # 
nvmes["$ctrl_dev"]=nvme3_ns 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:10:58.459 02:57:44 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:58.717 02:57:44 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:58.717 02:57:44 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:10:58.717 02:57:44 nvme_fdp -- nvme/functions.sh@202 -- # local _ctrls feature=fdp 00:10:58.717 02:57:44 nvme_fdp -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:58.717 02:57:44 nvme_fdp -- nvme/functions.sh@204 -- # get_ctrls_with_feature fdp 00:10:58.717 02:57:44 nvme_fdp -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:10:58.717 02:57:44 nvme_fdp -- nvme/functions.sh@192 -- # local ctrl feature=fdp 00:10:58.717 02:57:44 nvme_fdp -- nvme/functions.sh@194 -- # type -t ctrl_has_fdp 00:10:58.717 02:57:44 nvme_fdp -- nvme/functions.sh@194 -- # [[ function == function ]] 00:10:58.717 02:57:44 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:58.717 02:57:44 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme1 00:10:58.717 02:57:44 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme1 ctratt 00:10:58.717 02:57:44 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme1 00:10:58.717 02:57:44 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme1 00:10:58.717 02:57:44 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme1 ctratt 00:10:58.717 02:57:44 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:10:58.717 02:57:44 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:58.717 02:57:44 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:58.717 02:57:44 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:58.717 02:57:44 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:58.717 02:57:44 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:10:58.717 02:57:44 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:10:58.717 02:57:44 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:58.717 02:57:44 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme0 00:10:58.717 02:57:44 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme0 ctratt 00:10:58.717 02:57:44 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme0 00:10:58.717 02:57:44 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme0 00:10:58.717 02:57:44 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme0 ctratt 00:10:58.717 02:57:44 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:10:58.717 02:57:44 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:58.717 02:57:44 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:58.717 02:57:44 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:58.717 02:57:44 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:58.717 02:57:44 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:10:58.717 02:57:44 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:10:58.717 02:57:44 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:58.717 02:57:44 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme3 00:10:58.717 02:57:44 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme3 ctratt 00:10:58.717 02:57:44 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme3 00:10:58.717 02:57:44 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme3 00:10:58.717 02:57:44 nvme_fdp -- 
nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme3 ctratt 00:10:58.717 02:57:44 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:10:58.717 02:57:44 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:58.717 02:57:44 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:58.717 02:57:44 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:10:58.717 02:57:44 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:10:58.717 02:57:44 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x88010 00:10:58.717 02:57:44 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:10:58.717 02:57:44 nvme_fdp -- nvme/functions.sh@197 -- # echo nvme3 00:10:58.717 02:57:44 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:58.717 02:57:44 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme2 00:10:58.717 02:57:44 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme2 ctratt 00:10:58.717 02:57:44 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme2 00:10:58.717 02:57:44 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme2 00:10:58.717 02:57:44 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme2 ctratt 00:10:58.717 02:57:44 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:10:58.717 02:57:44 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:58.717 02:57:44 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:58.717 02:57:44 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:58.717 02:57:44 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:58.717 02:57:44 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:10:58.717 02:57:44 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:10:58.717 02:57:44 nvme_fdp -- nvme/functions.sh@204 -- # trap - ERR 00:10:58.717 02:57:44 nvme_fdp -- nvme/functions.sh@204 -- # print_backtrace 00:10:58.717 02:57:44 nvme_fdp -- common/autotest_common.sh@1149 -- # [[ hxBET =~ e ]] 00:10:58.717 02:57:44 nvme_fdp -- common/autotest_common.sh@1149 -- # return 0 00:10:58.717 02:57:44 nvme_fdp -- nvme/functions.sh@204 -- # trap - ERR 00:10:58.717 02:57:44 nvme_fdp -- nvme/functions.sh@204 -- # print_backtrace 00:10:58.717 02:57:44 nvme_fdp -- common/autotest_common.sh@1149 -- # [[ hxBET =~ e ]] 00:10:58.717 02:57:44 nvme_fdp -- common/autotest_common.sh@1149 -- # return 0 00:10:58.717 02:57:44 nvme_fdp -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:10:58.717 02:57:44 nvme_fdp -- nvme/functions.sh@206 -- # echo nvme3 00:10:58.717 02:57:44 nvme_fdp -- nvme/functions.sh@207 -- # return 0 00:10:58.717 02:57:44 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:10:58.717 02:57:44 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # bdf=0000:00:13.0 00:10:58.717 02:57:44 nvme_fdp -- nvme/nvme_fdp.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:58.975 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:59.910 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:59.910 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:59.910 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:59.910 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:59.910 02:57:45 nvme_fdp -- nvme/nvme_fdp.sh@17 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:59.910 02:57:45 nvme_fdp -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:10:59.910 02:57:45 nvme_fdp -- common/autotest_common.sh@1103 -- # 
xtrace_disable 00:10:59.910 02:57:45 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:10:59.910 ************************************ 00:10:59.910 START TEST nvme_flexible_data_placement 00:10:59.910 ************************************ 00:10:59.910 02:57:45 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:11:00.169 Initializing NVMe Controllers 00:11:00.169 Attaching to 0000:00:13.0 00:11:00.169 Controller supports FDP Attached to 0000:00:13.0 00:11:00.169 Namespace ID: 1 Endurance Group ID: 1 00:11:00.169 Initialization complete. 00:11:00.169 00:11:00.169 ================================== 00:11:00.169 == FDP tests for Namespace: #01 == 00:11:00.169 ================================== 00:11:00.169 00:11:00.169 Get Feature: FDP: 00:11:00.169 ================= 00:11:00.169 Enabled: Yes 00:11:00.169 FDP configuration Index: 0 00:11:00.169 00:11:00.169 FDP configurations log page 00:11:00.169 =========================== 00:11:00.169 Number of FDP configurations: 1 00:11:00.169 Version: 0 00:11:00.169 Size: 112 00:11:00.169 FDP Configuration Descriptor: 0 00:11:00.169 Descriptor Size: 96 00:11:00.169 Reclaim Group Identifier format: 2 00:11:00.169 FDP Volatile Write Cache: Not Present 00:11:00.169 FDP Configuration: Valid 00:11:00.169 Vendor Specific Size: 0 00:11:00.169 Number of Reclaim Groups: 2 00:11:00.169 Number of Reclaim Unit Handles: 8 00:11:00.169 Max Placement Identifiers: 128 00:11:00.169 Number of Namespaces Supported: 256 00:11:00.169 Reclaim unit Nominal Size: 6000000 bytes 00:11:00.169 Estimated Reclaim Unit Time Limit: Not Reported 00:11:00.169 RUH Desc #000: RUH Type: Initially Isolated 00:11:00.169 RUH Desc #001: RUH Type: Initially Isolated 00:11:00.169 RUH Desc #002: RUH Type: Initially Isolated 00:11:00.169 RUH Desc #003: RUH Type: Initially Isolated 00:11:00.169 RUH Desc #004: RUH Type: Initially Isolated 00:11:00.169 RUH Desc #005: RUH Type: Initially Isolated 00:11:00.169 RUH Desc #006: RUH Type: Initially Isolated 00:11:00.169 RUH Desc #007: RUH Type: Initially Isolated 00:11:00.169 00:11:00.169 FDP reclaim unit handle usage log page 00:11:00.169 ====================================== 00:11:00.169 Number of Reclaim Unit Handles: 8 00:11:00.169 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:00.169 RUH Usage Desc #001: RUH Attributes: Unused 00:11:00.169 RUH Usage Desc #002: RUH Attributes: Unused 00:11:00.169 RUH Usage Desc #003: RUH Attributes: Unused 00:11:00.169 RUH Usage Desc #004: RUH Attributes: Unused 00:11:00.169 RUH Usage Desc #005: RUH Attributes: Unused 00:11:00.169 RUH Usage Desc #006: RUH Attributes: Unused 00:11:00.169 RUH Usage Desc #007: RUH Attributes: Unused 00:11:00.169 00:11:00.169 FDP statistics log page 00:11:00.169 ======================= 00:11:00.169 Host bytes with metadata written: 1821507584 00:11:00.169 Media bytes with metadata written: 1821798400 00:11:00.169 Media bytes erased: 0 00:11:00.169 00:11:00.169 FDP Reclaim unit handle status 00:11:00.169 ============================== 00:11:00.169 Number of RUHS descriptors: 2 00:11:00.169 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x00000000000056e0 00:11:00.169 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:11:00.169 00:11:00.169 FDP write on placement id: 0 success 00:11:00.169 00:11:00.169 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:11:00.169 00:11:00.169 IO mgmt 
send: RUH update for Placement ID: #0 Success 00:11:00.169 00:11:00.169 Get Feature: FDP Events for Placement handle: #0 00:11:00.169 ======================== 00:11:00.169 Number of FDP Events: 6 00:11:00.169 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:11:00.169 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:11:00.169 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:11:00.169 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:11:00.169 FDP Event: #4 Type: Media Reallocated Enabled: No 00:11:00.169 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:11:00.169 00:11:00.169 FDP events log page 00:11:00.169 =================== 00:11:00.169 Number of FDP events: 1 00:11:00.169 FDP Event #0: 00:11:00.169 Event Type: RU Not Written to Capacity 00:11:00.169 Placement Identifier: Valid 00:11:00.169 NSID: Valid 00:11:00.169 Location: Valid 00:11:00.169 Placement Identifier: 0 00:11:00.169 Event Timestamp: 4 00:11:00.169 Namespace Identifier: 1 00:11:00.169 Reclaim Group Identifier: 0 00:11:00.169 Reclaim Unit Handle Identifier: 0 00:11:00.169 00:11:00.169 FDP test passed 00:11:00.169 00:11:00.169 real 0m0.263s 00:11:00.169 user 0m0.073s 00:11:00.169 sys 0m0.088s 00:11:00.169 02:57:46 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:00.169 ************************************ 00:11:00.169 END TEST nvme_flexible_data_placement 00:11:00.169 02:57:46 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:11:00.169 ************************************ 00:11:00.169 00:11:00.169 real 0m7.925s 00:11:00.169 user 0m1.234s 00:11:00.169 sys 0m1.673s 00:11:00.169 02:57:46 nvme_fdp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:00.169 ************************************ 00:11:00.169 END TEST nvme_fdp 00:11:00.169 ************************************ 00:11:00.169 02:57:46 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:11:00.169 02:57:46 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:11:00.169 02:57:46 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:00.169 02:57:46 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:00.169 02:57:46 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:00.169 02:57:46 -- common/autotest_common.sh@10 -- # set +x 00:11:00.169 ************************************ 00:11:00.169 START TEST nvme_rpc 00:11:00.169 ************************************ 00:11:00.169 02:57:46 nvme_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:00.169 * Looking for test storage... 
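For reference, the controller handed to the fdp binary above was picked by the CTRATT check traced earlier: FDP support is bit 19 of CTRATT, so nvme0, nvme1 and nvme2 (ctratt=0x8000) are skipped and nvme3 (ctratt=0x88010) is selected. A condensed, standalone sketch of that check, reading id-ctrl directly instead of the cached arrays and assuming nvme-cli's human-readable 'field : value' output:

    # Sketch of functions.sh's ctrl_has_fdp: true when CTRATT bit 19 (FDP) is set.
    # Assumes 'nvme id-ctrl' prints lines such as 'ctratt    : 0x88010'.
    ctrl_has_fdp() {
        local ctrl=$1 ctratt
        ctratt=$(nvme id-ctrl "/dev/$ctrl" | awk -F: '/^ctratt/ {gsub(/ /, "", $2); print $2}')
        (( ctratt & 1 << 19 ))
    }
    for ctrl in nvme0 nvme1 nvme2 nvme3; do
        ctrl_has_fdp "$ctrl" && echo "$ctrl"    # only nvme3 is printed on this run
    done
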
00:11:00.169 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:00.169 02:57:46 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:00.169 02:57:46 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:11:00.169 02:57:46 nvme_rpc -- common/autotest_common.sh@1520 -- # bdfs=() 00:11:00.169 02:57:46 nvme_rpc -- common/autotest_common.sh@1520 -- # local bdfs 00:11:00.169 02:57:46 nvme_rpc -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:11:00.169 02:57:46 nvme_rpc -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:11:00.169 02:57:46 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:11:00.169 02:57:46 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:11:00.169 02:57:46 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:00.169 02:57:46 nvme_rpc -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:00.169 02:57:46 nvme_rpc -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:11:00.428 02:57:46 nvme_rpc -- common/autotest_common.sh@1511 -- # (( 4 == 0 )) 00:11:00.428 02:57:46 nvme_rpc -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:00.428 02:57:46 nvme_rpc -- common/autotest_common.sh@1523 -- # echo 0000:00:10.0 00:11:00.428 02:57:46 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:11:00.428 02:57:46 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=84147 00:11:00.428 02:57:46 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:00.428 02:57:46 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:11:00.428 02:57:46 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 84147 00:11:00.428 02:57:46 nvme_rpc -- common/autotest_common.sh@827 -- # '[' -z 84147 ']' 00:11:00.428 02:57:46 nvme_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.428 02:57:46 nvme_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:00.428 02:57:46 nvme_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:00.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:00.428 02:57:46 nvme_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:00.428 02:57:46 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:00.428 [2024-05-14 02:57:46.363589] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:11:00.428 [2024-05-14 02:57:46.363784] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84147 ] 00:11:00.686 [2024-05-14 02:57:46.512507] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
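The 0000:00:10.0 address used for the controller attach below comes from the get_first_nvme_bdf helper traced above; condensed, the discovery is gen_nvme.sh piped through jq:

    # Condensed from the get_nvme_bdfs trace above; gen_nvme.sh emits a bdev_nvme
    # JSON config whose .config[].params.traddr fields are the PCI addresses.
    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo "No NVMe controllers found" >&2; exit 1; }
    echo "${bdfs[0]}"    # 0000:00:10.0 on this run
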
00:11:00.686 [2024-05-14 02:57:46.533340] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:00.686 [2024-05-14 02:57:46.577065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.686 [2024-05-14 02:57:46.577075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:01.621 02:57:47 nvme_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:01.621 02:57:47 nvme_rpc -- common/autotest_common.sh@860 -- # return 0 00:11:01.621 02:57:47 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:11:01.621 Nvme0n1 00:11:01.880 02:57:47 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:11:01.880 02:57:47 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:11:01.880 request: 00:11:01.880 { 00:11:01.880 "filename": "non_existing_file", 00:11:01.880 "bdev_name": "Nvme0n1", 00:11:01.880 "method": "bdev_nvme_apply_firmware", 00:11:01.880 "req_id": 1 00:11:01.880 } 00:11:01.880 Got JSON-RPC error response 00:11:01.880 response: 00:11:01.880 { 00:11:01.880 "code": -32603, 00:11:01.880 "message": "open file failed." 00:11:01.880 } 00:11:01.880 02:57:47 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:11:01.880 02:57:47 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:11:01.880 02:57:47 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:11:02.138 02:57:48 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:02.138 02:57:48 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 84147 00:11:02.139 02:57:48 nvme_rpc -- common/autotest_common.sh@946 -- # '[' -z 84147 ']' 00:11:02.139 02:57:48 nvme_rpc -- common/autotest_common.sh@950 -- # kill -0 84147 00:11:02.139 02:57:48 nvme_rpc -- common/autotest_common.sh@951 -- # uname 00:11:02.139 02:57:48 nvme_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:02.139 02:57:48 nvme_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 84147 00:11:02.139 02:57:48 nvme_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:02.139 02:57:48 nvme_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:02.139 killing process with pid 84147 00:11:02.139 02:57:48 nvme_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 84147' 00:11:02.139 02:57:48 nvme_rpc -- common/autotest_common.sh@965 -- # kill 84147 00:11:02.139 02:57:48 nvme_rpc -- common/autotest_common.sh@970 -- # wait 84147 00:11:02.397 00:11:02.397 real 0m2.288s 00:11:02.397 user 0m4.576s 00:11:02.397 sys 0m0.572s 00:11:02.397 02:57:48 nvme_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:02.397 02:57:48 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.397 ************************************ 00:11:02.397 END TEST nvme_rpc 00:11:02.397 ************************************ 00:11:02.397 02:57:48 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:02.397 02:57:48 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:02.397 02:57:48 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:02.397 02:57:48 -- common/autotest_common.sh@10 -- # set +x 00:11:02.657 ************************************ 00:11:02.657 START TEST nvme_rpc_timeouts 00:11:02.657 ************************************ 00:11:02.657 02:57:48 
nvme_rpc_timeouts -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:02.657 * Looking for test storage... 00:11:02.657 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:02.657 02:57:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:02.657 02:57:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_84201 00:11:02.657 02:57:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_84201 00:11:02.657 02:57:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=84225 00:11:02.657 02:57:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:11:02.657 02:57:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:02.657 02:57:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 84225 00:11:02.657 02:57:48 nvme_rpc_timeouts -- common/autotest_common.sh@827 -- # '[' -z 84225 ']' 00:11:02.657 02:57:48 nvme_rpc_timeouts -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:02.657 02:57:48 nvme_rpc_timeouts -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:02.657 02:57:48 nvme_rpc_timeouts -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:02.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:02.657 02:57:48 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:02.657 02:57:48 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:11:02.657 [2024-05-14 02:57:48.626468] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:11:02.657 [2024-05-14 02:57:48.626667] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84225 ] 00:11:02.916 [2024-05-14 02:57:48.775321] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
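Once the target is listening, the test snapshots the default bdev_nvme settings, changes the three timeout knobs over RPC, and snapshots again, as traced below. A sketch of that sequence (the redirections into the tmpfiles are not visible in the xtrace but are implied by the later grep):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc save_config > /tmp/settings_default_84201       # defaults, before any change
    $rpc bdev_nvme_set_options \
        --timeout-us=12000000 \
        --timeout-admin-us=24000000 \
        --action-on-timeout=abort
    $rpc save_config > /tmp/settings_modified_84201      # settings after the RPC
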
00:11:02.916 [2024-05-14 02:57:48.795836] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:02.916 [2024-05-14 02:57:48.830806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.916 [2024-05-14 02:57:48.830855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:03.853 02:57:49 nvme_rpc_timeouts -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:03.853 02:57:49 nvme_rpc_timeouts -- common/autotest_common.sh@860 -- # return 0 00:11:03.853 Checking default timeout settings: 00:11:03.853 02:57:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:11:03.853 02:57:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:03.853 Making settings changes with rpc: 00:11:03.853 02:57:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:11:03.853 02:57:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:11:04.119 Check default vs. modified settings: 00:11:04.119 02:57:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:11:04.119 02:57:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:04.686 02:57:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:11:04.686 02:57:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:04.686 02:57:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_84201 00:11:04.686 02:57:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:04.686 02:57:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:04.686 02:57:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:11:04.686 02:57:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_84201 00:11:04.686 02:57:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:04.686 02:57:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:04.686 02:57:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:11:04.686 02:57:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:11:04.686 Setting action_on_timeout is changed as expected. 00:11:04.686 02:57:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
00:11:04.686 02:57:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:04.686 02:57:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_84201 00:11:04.686 02:57:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:04.686 02:57:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:04.686 02:57:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:04.686 02:57:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:04.686 02:57:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_84201 00:11:04.686 02:57:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:04.686 02:57:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:11:04.686 02:57:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:11:04.686 02:57:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:11:04.686 Setting timeout_us is changed as expected. 00:11:04.686 02:57:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:04.686 02:57:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_84201 00:11:04.686 02:57:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:04.686 02:57:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:04.686 02:57:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:04.686 02:57:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_84201 00:11:04.686 02:57:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:04.686 02:57:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:04.686 02:57:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:11:04.686 Setting timeout_admin_us is changed as expected. 00:11:04.686 02:57:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:11:04.686 02:57:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
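(Illustrative sketch of the check above, re-using the RPC calls from the log; the diff-based comparison is a simplification of the test's grep/awk/sed loop.)
scripts/rpc.py save_config > /tmp/settings_default                        # capture defaults
scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 \
      --timeout-admin-us=24000000 --action-on-timeout=abort               # apply the test timeouts
scripts/rpc.py save_config > /tmp/settings_modified
for s in action_on_timeout timeout_us timeout_admin_us; do                # each setting must differ
      diff <(grep "$s" /tmp/settings_default) <(grep "$s" /tmp/settings_modified)
done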
00:11:04.686 02:57:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:11:04.686 02:57:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_84201 /tmp/settings_modified_84201 00:11:04.686 02:57:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 84225 00:11:04.686 02:57:50 nvme_rpc_timeouts -- common/autotest_common.sh@946 -- # '[' -z 84225 ']' 00:11:04.686 02:57:50 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # kill -0 84225 00:11:04.686 02:57:50 nvme_rpc_timeouts -- common/autotest_common.sh@951 -- # uname 00:11:04.686 02:57:50 nvme_rpc_timeouts -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:04.686 02:57:50 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 84225 00:11:04.686 02:57:50 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:04.686 killing process with pid 84225 00:11:04.686 02:57:50 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:04.686 02:57:50 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # echo 'killing process with pid 84225' 00:11:04.686 02:57:50 nvme_rpc_timeouts -- common/autotest_common.sh@965 -- # kill 84225 00:11:04.686 02:57:50 nvme_rpc_timeouts -- common/autotest_common.sh@970 -- # wait 84225 00:11:04.945 RPC TIMEOUT SETTING TEST PASSED. 00:11:04.945 02:57:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:11:04.945 00:11:04.945 real 0m2.385s 00:11:04.945 user 0m4.930s 00:11:04.945 sys 0m0.510s 00:11:04.945 02:57:50 nvme_rpc_timeouts -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:04.945 ************************************ 00:11:04.945 END TEST nvme_rpc_timeouts 00:11:04.945 ************************************ 00:11:04.945 02:57:50 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:11:04.945 02:57:50 -- spdk/autotest.sh@239 -- # uname -s 00:11:04.945 02:57:50 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:11:04.945 02:57:50 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:11:04.945 02:57:50 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:04.945 02:57:50 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:04.945 02:57:50 -- common/autotest_common.sh@10 -- # set +x 00:11:04.945 ************************************ 00:11:04.945 START TEST sw_hotplug 00:11:04.945 ************************************ 00:11:04.945 02:57:50 sw_hotplug -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:11:04.945 * Looking for test storage... 
00:11:04.945 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:04.945 02:57:50 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:05.516 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:05.516 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:05.516 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:05.516 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:05.516 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:05.516 02:57:51 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # hotplug_wait=6 00:11:05.516 02:57:51 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # hotplug_events=3 00:11:05.516 02:57:51 sw_hotplug -- nvme/sw_hotplug.sh@126 -- # nvmes=($(nvme_in_userspace)) 00:11:05.516 02:57:51 sw_hotplug -- nvme/sw_hotplug.sh@126 -- # nvme_in_userspace 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@309 -- # local bdf bdfs 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@310 -- # local nvmes 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@312 -- # [[ -n '' ]] 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@295 -- # local bdf= 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@230 -- # local class 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@231 -- # local subclass 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@232 -- # local progif 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@233 -- # printf %02x 1 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@233 -- # class=01 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@234 -- # printf %02x 8 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@234 -- # subclass=08 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@235 -- # printf %02x 2 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@235 -- # progif=02 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@237 -- # hash lspci 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@239 -- # lspci -mm -n -D 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@240 -- # grep -i -- -p02 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@242 -- # tr -d '"' 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@15 -- # local i 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:11:05.516 02:57:51 sw_hotplug -- 
scripts/common.sh@15 -- # local i 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:12.0 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@15 -- # local i 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:12.0 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:13.0 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@15 -- # local i 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:13.0 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@325 -- # (( 4 )) 00:11:05.516 02:57:51 sw_hotplug -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:05.516 02:57:51 sw_hotplug -- nvme/sw_hotplug.sh@127 -- # nvme_count=2 00:11:05.516 02:57:51 sw_hotplug -- 
nvme/sw_hotplug.sh@128 -- # nvmes=("${nvmes[@]::nvme_count}") 00:11:05.516 02:57:51 sw_hotplug -- nvme/sw_hotplug.sh@130 -- # xtrace_disable 00:11:05.516 02:57:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:05.774 02:57:51 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # run_hotplug 00:11:05.774 02:57:51 sw_hotplug -- nvme/sw_hotplug.sh@65 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:11:05.774 02:57:51 sw_hotplug -- nvme/sw_hotplug.sh@73 -- # hotplug_pid=84565 00:11:05.774 02:57:51 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:11:05.775 02:57:51 sw_hotplug -- nvme/sw_hotplug.sh@75 -- # debug_remove_attach_helper 3 6 false 00:11:05.775 02:57:51 sw_hotplug -- nvme/sw_hotplug.sh@14 -- # local helper_time=0 00:11:05.775 02:57:51 sw_hotplug -- nvme/sw_hotplug.sh@16 -- # timing_cmd remove_attach_helper 3 6 false 00:11:05.775 02:57:51 sw_hotplug -- common/autotest_common.sh@706 -- # [[ -t 0 ]] 00:11:05.775 02:57:51 sw_hotplug -- common/autotest_common.sh@706 -- # exec 00:11:05.775 02:57:51 sw_hotplug -- common/autotest_common.sh@708 -- # local time=0 TIMEFORMAT=%2R 00:11:05.775 02:57:51 sw_hotplug -- common/autotest_common.sh@714 -- # remove_attach_helper 3 6 false 00:11:05.775 02:57:51 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # local hotplug_events=3 00:11:05.775 02:57:51 sw_hotplug -- nvme/sw_hotplug.sh@23 -- # local hotplug_wait=6 00:11:05.775 02:57:51 sw_hotplug -- nvme/sw_hotplug.sh@24 -- # local use_bdev=false 00:11:05.775 02:57:51 sw_hotplug -- nvme/sw_hotplug.sh@25 -- # local dev bdfs 00:11:05.775 02:57:51 sw_hotplug -- nvme/sw_hotplug.sh@31 -- # sleep 6 00:11:06.033 Initializing NVMe Controllers 00:11:06.033 Attaching to 0000:00:10.0 00:11:06.033 Attaching to 0000:00:11.0 00:11:06.033 Attaching to 0000:00:12.0 00:11:06.033 Attaching to 0000:00:13.0 00:11:06.033 Attached to 0000:00:11.0 00:11:06.033 Attached to 0000:00:13.0 00:11:06.033 Attached to 0000:00:10.0 00:11:06.033 Attached to 0000:00:12.0 00:11:06.033 Initialization complete. Starting I/O... 
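(For reference, the nvme_in_userspace enumeration walked through above reduces to filtering lspci output on PCI class 01 / subclass 08 / prog-if 02, i.e. NVMe controllers; condensed from the scripts/common.sh pipeline shown in the trace.)
lspci -mm -n -D | grep -i -- -p02 \
      | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'   # print NVMe BDFs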
00:11:06.033 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:11:06.033 QEMU NVMe Ctrl (12343 ): 0 I/Os completed (+0) 00:11:06.033 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:11:06.033 QEMU NVMe Ctrl (12342 ): 0 I/Os completed (+0) 00:11:06.033 00:11:06.968 QEMU NVMe Ctrl (12341 ): 1200 I/Os completed (+1200) 00:11:06.968 QEMU NVMe Ctrl (12343 ): 1339 I/Os completed (+1339) 00:11:06.968 QEMU NVMe Ctrl (12340 ): 1225 I/Os completed (+1225) 00:11:06.968 QEMU NVMe Ctrl (12342 ): 1228 I/Os completed (+1228) 00:11:06.968 00:11:07.905 QEMU NVMe Ctrl (12341 ): 2505 I/Os completed (+1305) 00:11:07.905 QEMU NVMe Ctrl (12343 ): 2734 I/Os completed (+1395) 00:11:07.905 QEMU NVMe Ctrl (12340 ): 2569 I/Os completed (+1344) 00:11:07.905 QEMU NVMe Ctrl (12342 ): 2580 I/Os completed (+1352) 00:11:07.905 00:11:08.842 QEMU NVMe Ctrl (12341 ): 4249 I/Os completed (+1744) 00:11:08.842 QEMU NVMe Ctrl (12343 ): 4591 I/Os completed (+1857) 00:11:08.842 QEMU NVMe Ctrl (12340 ): 4341 I/Os completed (+1772) 00:11:08.842 QEMU NVMe Ctrl (12342 ): 4385 I/Os completed (+1805) 00:11:08.842 00:11:10.220 QEMU NVMe Ctrl (12341 ): 6084 I/Os completed (+1835) 00:11:10.220 QEMU NVMe Ctrl (12343 ): 6503 I/Os completed (+1912) 00:11:10.220 QEMU NVMe Ctrl (12340 ): 6178 I/Os completed (+1837) 00:11:10.220 QEMU NVMe Ctrl (12342 ): 6257 I/Os completed (+1872) 00:11:10.220 00:11:11.157 QEMU NVMe Ctrl (12341 ): 7976 I/Os completed (+1892) 00:11:11.157 QEMU NVMe Ctrl (12343 ): 8480 I/Os completed (+1977) 00:11:11.157 QEMU NVMe Ctrl (12340 ): 8097 I/Os completed (+1919) 00:11:11.157 QEMU NVMe Ctrl (12342 ): 8225 I/Os completed (+1968) 00:11:11.157 00:11:11.726 02:57:57 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:11:11.726 02:57:57 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:11:11.726 02:57:57 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:11:11.726 [2024-05-14 02:57:57.639639] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:11.726 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:11.726 [2024-05-14 02:57:57.641979] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.726 [2024-05-14 02:57:57.642063] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.726 [2024-05-14 02:57:57.642099] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.726 [2024-05-14 02:57:57.642123] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.726 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:11.726 [2024-05-14 02:57:57.644533] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.726 [2024-05-14 02:57:57.644596] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.726 [2024-05-14 02:57:57.644637] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.726 [2024-05-14 02:57:57.644658] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.726 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:10.0/vendor 00:11:11.726 EAL: Scan for (pci) bus failed. 
00:11:11.726 02:57:57 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:11:11.726 02:57:57 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:11:11.726 [2024-05-14 02:57:57.665509] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:11:11.726 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:11.726 [2024-05-14 02:57:57.667401] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.726 [2024-05-14 02:57:57.667494] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.726 [2024-05-14 02:57:57.667525] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.726 [2024-05-14 02:57:57.667552] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.726 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:11.726 [2024-05-14 02:57:57.669965] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.726 [2024-05-14 02:57:57.670020] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.726 [2024-05-14 02:57:57.670050] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.726 [2024-05-14 02:57:57.670074] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.726 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:11.726 EAL: Scan for (pci) bus failed. 00:11:11.726 02:57:57 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # false 00:11:11.726 02:57:57 sw_hotplug -- nvme/sw_hotplug.sh@44 -- # echo 1 00:11:11.985 02:57:57 sw_hotplug -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:11:11.985 02:57:57 sw_hotplug -- nvme/sw_hotplug.sh@47 -- # echo uio_pci_generic 00:11:11.985 02:57:57 sw_hotplug -- nvme/sw_hotplug.sh@48 -- # echo 0000:00:10.0 00:11:11.985 02:57:57 sw_hotplug -- nvme/sw_hotplug.sh@49 -- # echo 0000:00:10.0 00:11:11.985 QEMU NVMe Ctrl (12343 ): 10484 I/Os completed (+2004) 00:11:11.985 QEMU NVMe Ctrl (12342 ): 10233 I/Os completed (+2008) 00:11:11.985 00:11:11.985 02:57:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # echo '' 00:11:11.985 02:57:57 sw_hotplug -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:11:11.985 02:57:57 sw_hotplug -- nvme/sw_hotplug.sh@47 -- # echo uio_pci_generic 00:11:11.985 02:57:57 sw_hotplug -- nvme/sw_hotplug.sh@48 -- # echo 0000:00:11.0 00:11:11.985 Attaching to 0000:00:10.0 00:11:11.985 Attached to 0000:00:10.0 00:11:11.985 02:57:57 sw_hotplug -- nvme/sw_hotplug.sh@49 -- # echo 0000:00:11.0 00:11:11.985 02:57:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # echo '' 00:11:11.985 02:57:57 sw_hotplug -- nvme/sw_hotplug.sh@54 -- # sleep 12 00:11:11.985 Attaching to 0000:00:11.0 00:11:11.985 Attached to 0000:00:11.0 00:11:12.921 QEMU NVMe Ctrl (12343 ): 12360 I/Os completed (+1876) 00:11:12.921 QEMU NVMe Ctrl (12342 ): 12197 I/Os completed (+1964) 00:11:12.921 QEMU NVMe Ctrl (12340 ): 1843 I/Os completed (+1843) 00:11:12.921 QEMU NVMe Ctrl (12341 ): 1674 I/Os completed (+1674) 00:11:12.921 00:11:13.857 QEMU NVMe Ctrl (12343 ): 14111 I/Os completed (+1751) 00:11:13.857 QEMU NVMe Ctrl (12342 ): 14100 I/Os completed (+1903) 00:11:13.857 QEMU NVMe Ctrl (12340 ): 3612 I/Os completed (+1769) 00:11:13.857 QEMU NVMe Ctrl (12341 ): 3450 I/Os completed (+1776) 00:11:13.857 00:11:15.230 QEMU NVMe Ctrl (12343 ): 15873 
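(Illustrative note: the remove/attach cycle above goes through the standard Linux PCI sysfs interface; driver rebinding to uio_pci_generic is handled by the test script, so only the removal and rescan are sketched here.)
echo 1 > /sys/bus/pci/devices/0000:00:10.0/remove      # surprise-remove the controller
echo 1 > /sys/bus/pci/devices/0000:00:11.0/remove
echo 1 > /sys/bus/pci/rescan                           # re-discover and re-attach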
I/Os completed (+1762) 00:11:15.230 QEMU NVMe Ctrl (12342 ): 15972 I/Os completed (+1872) 00:11:15.230 QEMU NVMe Ctrl (12340 ): 5400 I/Os completed (+1788) 00:11:15.230 QEMU NVMe Ctrl (12341 ): 5252 I/Os completed (+1802) 00:11:15.230 00:11:16.166 QEMU NVMe Ctrl (12343 ): 17708 I/Os completed (+1835) 00:11:16.166 QEMU NVMe Ctrl (12342 ): 17889 I/Os completed (+1917) 00:11:16.166 QEMU NVMe Ctrl (12340 ): 7256 I/Os completed (+1856) 00:11:16.166 QEMU NVMe Ctrl (12341 ): 7145 I/Os completed (+1893) 00:11:16.166 00:11:17.101 QEMU NVMe Ctrl (12343 ): 19445 I/Os completed (+1737) 00:11:17.101 QEMU NVMe Ctrl (12342 ): 19696 I/Os completed (+1807) 00:11:17.101 QEMU NVMe Ctrl (12340 ): 9012 I/Os completed (+1756) 00:11:17.101 QEMU NVMe Ctrl (12341 ): 8934 I/Os completed (+1789) 00:11:17.101 00:11:18.035 QEMU NVMe Ctrl (12343 ): 21363 I/Os completed (+1918) 00:11:18.035 QEMU NVMe Ctrl (12342 ): 21656 I/Os completed (+1960) 00:11:18.035 QEMU NVMe Ctrl (12340 ): 10930 I/Os completed (+1918) 00:11:18.035 QEMU NVMe Ctrl (12341 ): 10868 I/Os completed (+1934) 00:11:18.035 00:11:18.969 QEMU NVMe Ctrl (12343 ): 23064 I/Os completed (+1701) 00:11:18.969 QEMU NVMe Ctrl (12342 ): 23493 I/Os completed (+1837) 00:11:18.969 QEMU NVMe Ctrl (12340 ): 12650 I/Os completed (+1720) 00:11:18.969 QEMU NVMe Ctrl (12341 ): 12620 I/Os completed (+1752) 00:11:18.969 00:11:19.905 QEMU NVMe Ctrl (12343 ): 24988 I/Os completed (+1924) 00:11:19.905 QEMU NVMe Ctrl (12342 ): 25485 I/Os completed (+1992) 00:11:19.905 QEMU NVMe Ctrl (12340 ): 14600 I/Os completed (+1950) 00:11:19.905 QEMU NVMe Ctrl (12341 ): 14580 I/Os completed (+1960) 00:11:19.905 00:11:20.839 QEMU NVMe Ctrl (12343 ): 26721 I/Os completed (+1733) 00:11:20.839 QEMU NVMe Ctrl (12342 ): 27334 I/Os completed (+1849) 00:11:20.839 QEMU NVMe Ctrl (12340 ): 16367 I/Os completed (+1767) 00:11:20.839 QEMU NVMe Ctrl (12341 ): 16378 I/Os completed (+1798) 00:11:20.839 00:11:22.218 QEMU NVMe Ctrl (12343 ): 28622 I/Os completed (+1901) 00:11:22.218 QEMU NVMe Ctrl (12342 ): 29313 I/Os completed (+1979) 00:11:22.218 QEMU NVMe Ctrl (12340 ): 18309 I/Os completed (+1942) 00:11:22.218 QEMU NVMe Ctrl (12341 ): 18329 I/Os completed (+1951) 00:11:22.218 00:11:23.155 QEMU NVMe Ctrl (12343 ): 30609 I/Os completed (+1987) 00:11:23.155 QEMU NVMe Ctrl (12342 ): 31355 I/Os completed (+2042) 00:11:23.155 QEMU NVMe Ctrl (12340 ): 20303 I/Os completed (+1994) 00:11:23.155 QEMU NVMe Ctrl (12341 ): 20331 I/Os completed (+2002) 00:11:23.155 00:11:24.092 QEMU NVMe Ctrl (12343 ): 32389 I/Os completed (+1780) 00:11:24.092 QEMU NVMe Ctrl (12342 ): 33269 I/Os completed (+1914) 00:11:24.092 QEMU NVMe Ctrl (12340 ): 22132 I/Os completed (+1829) 00:11:24.092 QEMU NVMe Ctrl (12341 ): 22170 I/Os completed (+1839) 00:11:24.092 00:11:24.092 02:58:09 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # false 00:11:24.092 02:58:09 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:11:24.092 02:58:09 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:11:24.092 02:58:09 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:11:24.092 [2024-05-14 02:58:09.972527] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:11:24.092 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:24.092 [2024-05-14 02:58:09.974645] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.092 [2024-05-14 02:58:09.974699] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.092 [2024-05-14 02:58:09.974725] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.092 [2024-05-14 02:58:09.974751] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.092 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:24.092 [2024-05-14 02:58:09.976682] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.092 [2024-05-14 02:58:09.976753] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.092 [2024-05-14 02:58:09.976776] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.092 [2024-05-14 02:58:09.976793] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.092 02:58:09 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:11:24.092 02:58:09 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:11:24.092 [2024-05-14 02:58:10.001582] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:11:24.092 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:24.092 [2024-05-14 02:58:10.003228] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.092 [2024-05-14 02:58:10.003284] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.092 [2024-05-14 02:58:10.003307] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.092 [2024-05-14 02:58:10.003328] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.092 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:24.092 [2024-05-14 02:58:10.004814] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.092 [2024-05-14 02:58:10.004859] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.092 [2024-05-14 02:58:10.004881] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.092 [2024-05-14 02:58:10.004903] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.092 02:58:10 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # false 00:11:24.092 02:58:10 sw_hotplug -- nvme/sw_hotplug.sh@44 -- # echo 1 00:11:24.352 02:58:10 sw_hotplug -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:11:24.352 02:58:10 sw_hotplug -- nvme/sw_hotplug.sh@47 -- # echo uio_pci_generic 00:11:24.352 02:58:10 sw_hotplug -- nvme/sw_hotplug.sh@48 -- # echo 0000:00:10.0 00:11:24.352 02:58:10 sw_hotplug -- nvme/sw_hotplug.sh@49 -- # echo 0000:00:10.0 00:11:24.352 02:58:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # echo '' 00:11:24.352 02:58:10 sw_hotplug -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:11:24.352 02:58:10 sw_hotplug -- nvme/sw_hotplug.sh@47 -- # echo uio_pci_generic 00:11:24.352 02:58:10 sw_hotplug -- nvme/sw_hotplug.sh@48 -- # echo 0000:00:11.0 00:11:24.352 Attaching to 0000:00:10.0 00:11:24.352 Attached to 0000:00:10.0 
00:11:24.352 02:58:10 sw_hotplug -- nvme/sw_hotplug.sh@49 -- # echo 0000:00:11.0 00:11:24.352 02:58:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # echo '' 00:11:24.352 02:58:10 sw_hotplug -- nvme/sw_hotplug.sh@54 -- # sleep 12 00:11:24.352 Attaching to 0000:00:11.0 00:11:24.352 Attached to 0000:00:11.0 00:11:24.920 QEMU NVMe Ctrl (12343 ): 34392 I/Os completed (+2003) 00:11:24.920 QEMU NVMe Ctrl (12342 ): 35346 I/Os completed (+2077) 00:11:24.920 QEMU NVMe Ctrl (12340 ): 1233 I/Os completed (+1233) 00:11:24.920 QEMU NVMe Ctrl (12341 ): 1052 I/Os completed (+1052) 00:11:24.920 00:11:25.857 QEMU NVMe Ctrl (12343 ): 36111 I/Os completed (+1719) 00:11:25.857 QEMU NVMe Ctrl (12342 ): 37278 I/Os completed (+1932) 00:11:25.857 QEMU NVMe Ctrl (12340 ): 2993 I/Os completed (+1760) 00:11:25.857 QEMU NVMe Ctrl (12341 ): 2818 I/Os completed (+1766) 00:11:25.857 00:11:27.236 QEMU NVMe Ctrl (12343 ): 37944 I/Os completed (+1833) 00:11:27.236 QEMU NVMe Ctrl (12342 ): 39204 I/Os completed (+1926) 00:11:27.236 QEMU NVMe Ctrl (12340 ): 4883 I/Os completed (+1890) 00:11:27.236 QEMU NVMe Ctrl (12341 ): 4692 I/Os completed (+1874) 00:11:27.236 00:11:28.171 QEMU NVMe Ctrl (12343 ): 39649 I/Os completed (+1705) 00:11:28.171 QEMU NVMe Ctrl (12342 ): 41040 I/Os completed (+1836) 00:11:28.171 QEMU NVMe Ctrl (12340 ): 6633 I/Os completed (+1750) 00:11:28.171 QEMU NVMe Ctrl (12341 ): 6428 I/Os completed (+1736) 00:11:28.171 00:11:29.106 QEMU NVMe Ctrl (12343 ): 41582 I/Os completed (+1933) 00:11:29.106 QEMU NVMe Ctrl (12342 ): 43045 I/Os completed (+2005) 00:11:29.106 QEMU NVMe Ctrl (12340 ): 8584 I/Os completed (+1951) 00:11:29.106 QEMU NVMe Ctrl (12341 ): 8395 I/Os completed (+1967) 00:11:29.106 00:11:30.041 QEMU NVMe Ctrl (12343 ): 43400 I/Os completed (+1818) 00:11:30.041 QEMU NVMe Ctrl (12342 ): 44952 I/Os completed (+1907) 00:11:30.041 QEMU NVMe Ctrl (12340 ): 10418 I/Os completed (+1834) 00:11:30.041 QEMU NVMe Ctrl (12341 ): 10264 I/Os completed (+1869) 00:11:30.041 00:11:30.977 QEMU NVMe Ctrl (12343 ): 45211 I/Os completed (+1811) 00:11:30.977 QEMU NVMe Ctrl (12342 ): 46814 I/Os completed (+1862) 00:11:30.977 QEMU NVMe Ctrl (12340 ): 12254 I/Os completed (+1836) 00:11:30.977 QEMU NVMe Ctrl (12341 ): 12082 I/Os completed (+1818) 00:11:30.977 00:11:31.914 QEMU NVMe Ctrl (12343 ): 47079 I/Os completed (+1868) 00:11:31.914 QEMU NVMe Ctrl (12342 ): 48766 I/Os completed (+1952) 00:11:31.914 QEMU NVMe Ctrl (12340 ): 14172 I/Os completed (+1918) 00:11:31.914 QEMU NVMe Ctrl (12341 ): 13998 I/Os completed (+1916) 00:11:31.914 00:11:32.850 QEMU NVMe Ctrl (12343 ): 48954 I/Os completed (+1875) 00:11:32.850 QEMU NVMe Ctrl (12342 ): 50684 I/Os completed (+1918) 00:11:32.850 QEMU NVMe Ctrl (12340 ): 16043 I/Os completed (+1871) 00:11:32.850 QEMU NVMe Ctrl (12341 ): 15880 I/Os completed (+1882) 00:11:32.850 00:11:34.228 QEMU NVMe Ctrl (12343 ): 50890 I/Os completed (+1936) 00:11:34.228 QEMU NVMe Ctrl (12342 ): 52659 I/Os completed (+1975) 00:11:34.228 QEMU NVMe Ctrl (12340 ): 17980 I/Os completed (+1937) 00:11:34.228 QEMU NVMe Ctrl (12341 ): 17829 I/Os completed (+1949) 00:11:34.228 00:11:35.202 QEMU NVMe Ctrl (12343 ): 52689 I/Os completed (+1799) 00:11:35.202 QEMU NVMe Ctrl (12342 ): 54562 I/Os completed (+1903) 00:11:35.202 QEMU NVMe Ctrl (12340 ): 19800 I/Os completed (+1820) 00:11:35.202 QEMU NVMe Ctrl (12341 ): 19648 I/Os completed (+1819) 00:11:35.202 00:11:36.139 QEMU NVMe Ctrl (12343 ): 54570 I/Os completed (+1881) 00:11:36.139 QEMU NVMe Ctrl (12342 ): 56493 I/Os completed (+1931) 00:11:36.139 QEMU NVMe Ctrl (12340 ): 
21705 I/Os completed (+1905) 00:11:36.139 QEMU NVMe Ctrl (12341 ): 21555 I/Os completed (+1907) 00:11:36.139 00:11:36.399 02:58:22 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # false 00:11:36.399 02:58:22 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:11:36.399 02:58:22 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:11:36.399 02:58:22 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:11:36.399 [2024-05-14 02:58:22.327374] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:36.399 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:36.399 [2024-05-14 02:58:22.329903] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.399 [2024-05-14 02:58:22.329978] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.399 [2024-05-14 02:58:22.330027] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.399 [2024-05-14 02:58:22.330063] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.399 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:36.399 [2024-05-14 02:58:22.332198] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.399 [2024-05-14 02:58:22.332266] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.399 [2024-05-14 02:58:22.332309] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.399 [2024-05-14 02:58:22.332346] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.399 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:10.0/vendor 00:11:36.399 EAL: Scan for (pci) bus failed. 00:11:36.399 02:58:22 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:11:36.399 02:58:22 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:11:36.399 [2024-05-14 02:58:22.358401] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:11:36.399 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:36.399 [2024-05-14 02:58:22.360350] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.399 [2024-05-14 02:58:22.360420] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.399 [2024-05-14 02:58:22.360462] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.399 [2024-05-14 02:58:22.360501] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.399 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:36.399 [2024-05-14 02:58:22.362324] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.399 [2024-05-14 02:58:22.362391] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.399 [2024-05-14 02:58:22.362433] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.399 [2024-05-14 02:58:22.362470] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.399 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:36.399 EAL: Scan for (pci) bus failed. 
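(The per-cycle timing reported just below, "remove_attach_helper took 43.05s ...", comes from bash's time keyword with TIMEFORMAT, roughly:)
TIMEFORMAT=%2R                            # elapsed wall time, two decimal places
time remove_attach_helper 3 6 false       # 3 hotplug events, 6 s settle time, no bdev layer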
00:11:36.399 02:58:22 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # false 00:11:36.399 02:58:22 sw_hotplug -- nvme/sw_hotplug.sh@44 -- # echo 1 00:11:36.658 02:58:22 sw_hotplug -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:11:36.658 02:58:22 sw_hotplug -- nvme/sw_hotplug.sh@47 -- # echo uio_pci_generic 00:11:36.658 02:58:22 sw_hotplug -- nvme/sw_hotplug.sh@48 -- # echo 0000:00:10.0 00:11:36.658 02:58:22 sw_hotplug -- nvme/sw_hotplug.sh@49 -- # echo 0000:00:10.0 00:11:36.658 02:58:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # echo '' 00:11:36.658 02:58:22 sw_hotplug -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:11:36.658 02:58:22 sw_hotplug -- nvme/sw_hotplug.sh@47 -- # echo uio_pci_generic 00:11:36.658 02:58:22 sw_hotplug -- nvme/sw_hotplug.sh@48 -- # echo 0000:00:11.0 00:11:36.658 Attaching to 0000:00:10.0 00:11:36.658 Attached to 0000:00:10.0 00:11:36.658 02:58:22 sw_hotplug -- nvme/sw_hotplug.sh@49 -- # echo 0000:00:11.0 00:11:36.917 02:58:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # echo '' 00:11:36.917 02:58:22 sw_hotplug -- nvme/sw_hotplug.sh@54 -- # sleep 12 00:11:36.917 Attaching to 0000:00:11.0 00:11:36.917 Attached to 0000:00:11.0 00:11:36.917 unregister_dev: QEMU NVMe Ctrl (12343 ) 00:11:36.917 unregister_dev: QEMU NVMe Ctrl (12342 ) 00:11:36.917 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:36.917 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:36.917 [2024-05-14 02:58:22.699838] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:11:49.123 02:58:34 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # false 00:11:49.123 02:58:34 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:11:49.123 02:58:34 sw_hotplug -- common/autotest_common.sh@714 -- # time=43.05 00:11:49.123 02:58:34 sw_hotplug -- common/autotest_common.sh@716 -- # echo 43.05 00:11:49.123 02:58:34 sw_hotplug -- nvme/sw_hotplug.sh@16 -- # helper_time=43.05 00:11:49.123 02:58:34 sw_hotplug -- nvme/sw_hotplug.sh@17 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.05 2 00:11:49.124 remove_attach_helper took 43.05s to complete (handling 2 nvme drive(s)) 02:58:34 sw_hotplug -- nvme/sw_hotplug.sh@79 -- # sleep 6 00:11:55.688 02:58:40 sw_hotplug -- nvme/sw_hotplug.sh@81 -- # kill -0 84565 00:11:55.688 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 81: kill: (84565) - No such process 00:11:55.688 02:58:40 sw_hotplug -- nvme/sw_hotplug.sh@83 -- # wait 84565 00:11:55.688 02:58:40 sw_hotplug -- nvme/sw_hotplug.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:11:55.688 02:58:40 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # tgt_run_hotplug 00:11:55.688 02:58:40 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # local dev 00:11:55.688 02:58:40 sw_hotplug -- nvme/sw_hotplug.sh@98 -- # spdk_tgt_pid=85108 00:11:55.688 02:58:40 sw_hotplug -- nvme/sw_hotplug.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:55.688 02:58:40 sw_hotplug -- nvme/sw_hotplug.sh@100 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:11:55.688 02:58:40 sw_hotplug -- nvme/sw_hotplug.sh@101 -- # waitforlisten 85108 00:11:55.688 02:58:40 sw_hotplug -- common/autotest_common.sh@827 -- # '[' -z 85108 ']' 00:11:55.688 02:58:40 sw_hotplug -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:55.688 02:58:40 sw_hotplug -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:55.688 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:11:55.688 02:58:40 sw_hotplug -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:55.688 02:58:40 sw_hotplug -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:55.688 02:58:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:55.688 [2024-05-14 02:58:40.814427] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:11:55.689 [2024-05-14 02:58:40.814614] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85108 ] 00:11:55.689 [2024-05-14 02:58:40.963685] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:55.689 [2024-05-14 02:58:40.986192] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:55.689 [2024-05-14 02:58:41.029405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.689 02:58:41 sw_hotplug -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:55.689 02:58:41 sw_hotplug -- common/autotest_common.sh@860 -- # return 0 00:11:55.689 02:58:41 sw_hotplug -- nvme/sw_hotplug.sh@103 -- # for dev in "${!nvmes[@]}" 00:11:55.689 02:58:41 sw_hotplug -- nvme/sw_hotplug.sh@104 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme00 -t PCIe -a 0000:00:10.0 00:11:55.689 02:58:41 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.689 02:58:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:55.947 Nvme00n1 00:11:55.947 02:58:41 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.947 02:58:41 sw_hotplug -- nvme/sw_hotplug.sh@105 -- # waitforbdev Nvme00n1 6 00:11:55.948 02:58:41 sw_hotplug -- common/autotest_common.sh@895 -- # local bdev_name=Nvme00n1 00:11:55.948 02:58:41 sw_hotplug -- common/autotest_common.sh@896 -- # local bdev_timeout=6 00:11:55.948 02:58:41 sw_hotplug -- common/autotest_common.sh@897 -- # local i 00:11:55.948 02:58:41 sw_hotplug -- common/autotest_common.sh@898 -- # [[ -z 6 ]] 00:11:55.948 02:58:41 sw_hotplug -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:11:55.948 02:58:41 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.948 02:58:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:55.948 02:58:41 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.948 02:58:41 sw_hotplug -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Nvme00n1 -t 6 00:11:55.948 02:58:41 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.948 02:58:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:55.948 [ 00:11:55.948 { 00:11:55.948 "name": "Nvme00n1", 00:11:55.948 "aliases": [ 00:11:55.948 "12d02efa-591f-4379-8461-538860a4e677" 00:11:55.948 ], 00:11:55.948 "product_name": "NVMe disk", 00:11:55.948 "block_size": 4096, 00:11:55.948 "num_blocks": 1548666, 00:11:55.948 "uuid": "12d02efa-591f-4379-8461-538860a4e677", 00:11:55.948 "md_size": 64, 00:11:55.948 "md_interleave": false, 00:11:55.948 "dif_type": 0, 00:11:55.948 "assigned_rate_limits": { 00:11:55.948 "rw_ios_per_sec": 0, 00:11:55.948 "rw_mbytes_per_sec": 0, 00:11:55.948 "r_mbytes_per_sec": 0, 00:11:55.948 "w_mbytes_per_sec": 0 00:11:55.948 }, 00:11:55.948 "claimed": false, 00:11:55.948 "zoned": false, 
00:11:55.948 "supported_io_types": { 00:11:55.948 "read": true, 00:11:55.948 "write": true, 00:11:55.948 "unmap": true, 00:11:55.948 "write_zeroes": true, 00:11:55.948 "flush": true, 00:11:55.948 "reset": true, 00:11:55.948 "compare": true, 00:11:55.948 "compare_and_write": false, 00:11:55.948 "abort": true, 00:11:55.948 "nvme_admin": true, 00:11:55.948 "nvme_io": true 00:11:55.948 }, 00:11:55.948 "driver_specific": { 00:11:55.948 "nvme": [ 00:11:55.948 { 00:11:55.948 "pci_address": "0000:00:10.0", 00:11:55.948 "trid": { 00:11:55.948 "trtype": "PCIe", 00:11:55.948 "traddr": "0000:00:10.0" 00:11:55.948 }, 00:11:55.948 "ctrlr_data": { 00:11:55.948 "cntlid": 0, 00:11:55.948 "vendor_id": "0x1b36", 00:11:55.948 "model_number": "QEMU NVMe Ctrl", 00:11:55.948 "serial_number": "12340", 00:11:55.948 "firmware_revision": "8.0.0", 00:11:55.948 "subnqn": "nqn.2019-08.org.qemu:12340", 00:11:55.948 "oacs": { 00:11:55.948 "security": 0, 00:11:55.948 "format": 1, 00:11:55.948 "firmware": 0, 00:11:55.948 "ns_manage": 1 00:11:55.948 }, 00:11:55.948 "multi_ctrlr": false, 00:11:55.948 "ana_reporting": false 00:11:55.948 }, 00:11:55.948 "vs": { 00:11:55.948 "nvme_version": "1.4" 00:11:55.948 }, 00:11:55.948 "ns_data": { 00:11:55.948 "id": 1, 00:11:55.948 "can_share": false 00:11:55.948 } 00:11:55.948 } 00:11:55.948 ], 00:11:55.948 "mp_policy": "active_passive" 00:11:55.948 } 00:11:55.948 } 00:11:55.948 ] 00:11:55.948 02:58:41 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.948 02:58:41 sw_hotplug -- common/autotest_common.sh@903 -- # return 0 00:11:55.948 02:58:41 sw_hotplug -- nvme/sw_hotplug.sh@103 -- # for dev in "${!nvmes[@]}" 00:11:55.948 02:58:41 sw_hotplug -- nvme/sw_hotplug.sh@104 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme01 -t PCIe -a 0000:00:11.0 00:11:55.948 02:58:41 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.948 02:58:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:55.948 Nvme01n1 00:11:55.948 02:58:41 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.948 02:58:41 sw_hotplug -- nvme/sw_hotplug.sh@105 -- # waitforbdev Nvme01n1 6 00:11:55.948 02:58:41 sw_hotplug -- common/autotest_common.sh@895 -- # local bdev_name=Nvme01n1 00:11:55.948 02:58:41 sw_hotplug -- common/autotest_common.sh@896 -- # local bdev_timeout=6 00:11:55.948 02:58:41 sw_hotplug -- common/autotest_common.sh@897 -- # local i 00:11:55.948 02:58:41 sw_hotplug -- common/autotest_common.sh@898 -- # [[ -z 6 ]] 00:11:55.948 02:58:41 sw_hotplug -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:11:55.948 02:58:41 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.948 02:58:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:55.948 02:58:41 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.948 02:58:41 sw_hotplug -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Nvme01n1 -t 6 00:11:55.948 02:58:41 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.948 02:58:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:55.948 [ 00:11:55.948 { 00:11:55.948 "name": "Nvme01n1", 00:11:55.948 "aliases": [ 00:11:55.948 "6947492e-f2ab-40f7-9759-b009f63481c4" 00:11:55.948 ], 00:11:55.948 "product_name": "NVMe disk", 00:11:55.948 "block_size": 4096, 00:11:55.948 "num_blocks": 1310720, 00:11:55.948 "uuid": "6947492e-f2ab-40f7-9759-b009f63481c4", 00:11:55.948 "assigned_rate_limits": { 00:11:55.948 "rw_ios_per_sec": 0, 00:11:55.948 
"rw_mbytes_per_sec": 0, 00:11:55.948 "r_mbytes_per_sec": 0, 00:11:55.948 "w_mbytes_per_sec": 0 00:11:55.948 }, 00:11:55.948 "claimed": false, 00:11:55.948 "zoned": false, 00:11:55.948 "supported_io_types": { 00:11:55.948 "read": true, 00:11:55.948 "write": true, 00:11:55.948 "unmap": true, 00:11:55.948 "write_zeroes": true, 00:11:55.948 "flush": true, 00:11:55.948 "reset": true, 00:11:55.948 "compare": true, 00:11:55.948 "compare_and_write": false, 00:11:55.948 "abort": true, 00:11:55.948 "nvme_admin": true, 00:11:55.948 "nvme_io": true 00:11:55.948 }, 00:11:55.948 "driver_specific": { 00:11:55.948 "nvme": [ 00:11:55.948 { 00:11:55.948 "pci_address": "0000:00:11.0", 00:11:55.948 "trid": { 00:11:55.948 "trtype": "PCIe", 00:11:55.948 "traddr": "0000:00:11.0" 00:11:55.948 }, 00:11:55.948 "ctrlr_data": { 00:11:55.948 "cntlid": 0, 00:11:55.948 "vendor_id": "0x1b36", 00:11:55.948 "model_number": "QEMU NVMe Ctrl", 00:11:55.948 "serial_number": "12341", 00:11:55.948 "firmware_revision": "8.0.0", 00:11:55.948 "subnqn": "nqn.2019-08.org.qemu:12341", 00:11:55.948 "oacs": { 00:11:55.948 "security": 0, 00:11:55.948 "format": 1, 00:11:55.948 "firmware": 0, 00:11:55.948 "ns_manage": 1 00:11:55.948 }, 00:11:55.948 "multi_ctrlr": false, 00:11:55.948 "ana_reporting": false 00:11:55.948 }, 00:11:55.948 "vs": { 00:11:55.948 "nvme_version": "1.4" 00:11:55.948 }, 00:11:55.948 "ns_data": { 00:11:55.948 "id": 1, 00:11:55.948 "can_share": false 00:11:55.948 } 00:11:55.948 } 00:11:55.948 ], 00:11:55.948 "mp_policy": "active_passive" 00:11:55.948 } 00:11:55.948 } 00:11:55.948 ] 00:11:55.948 02:58:41 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.948 02:58:41 sw_hotplug -- common/autotest_common.sh@903 -- # return 0 00:11:55.948 02:58:41 sw_hotplug -- nvme/sw_hotplug.sh@108 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:55.948 02:58:41 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.948 02:58:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:55.948 02:58:41 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.948 02:58:41 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # debug_remove_attach_helper 3 6 true 00:11:55.948 02:58:41 sw_hotplug -- nvme/sw_hotplug.sh@14 -- # local helper_time=0 00:11:55.948 02:58:41 sw_hotplug -- nvme/sw_hotplug.sh@16 -- # timing_cmd remove_attach_helper 3 6 true 00:11:55.948 02:58:41 sw_hotplug -- common/autotest_common.sh@706 -- # [[ -t 0 ]] 00:11:55.948 02:58:41 sw_hotplug -- common/autotest_common.sh@706 -- # exec 00:11:55.948 02:58:41 sw_hotplug -- common/autotest_common.sh@708 -- # local time=0 TIMEFORMAT=%2R 00:11:55.948 02:58:41 sw_hotplug -- common/autotest_common.sh@714 -- # remove_attach_helper 3 6 true 00:11:55.948 02:58:41 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # local hotplug_events=3 00:11:55.948 02:58:41 sw_hotplug -- nvme/sw_hotplug.sh@23 -- # local hotplug_wait=6 00:11:55.948 02:58:41 sw_hotplug -- nvme/sw_hotplug.sh@24 -- # local use_bdev=true 00:11:55.948 02:58:41 sw_hotplug -- nvme/sw_hotplug.sh@25 -- # local dev bdfs 00:11:55.948 02:58:41 sw_hotplug -- nvme/sw_hotplug.sh@31 -- # sleep 6 00:12:02.517 02:58:47 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:12:02.517 02:58:47 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:12:02.517 02:58:47 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:12:02.517 02:58:47 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:12:02.517 02:58:47 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:12:02.517 
02:58:47 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # true 00:12:02.517 02:58:47 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # sleep 6 00:12:02.517 [2024-05-14 02:58:47.952000] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:12:02.517 [2024-05-14 02:58:47.954571] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.517 [2024-05-14 02:58:47.954638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:02.517 [2024-05-14 02:58:47.954685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:02.517 [2024-05-14 02:58:47.954718] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.517 [2024-05-14 02:58:47.954744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:02.517 [2024-05-14 02:58:47.954797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:02.517 [2024-05-14 02:58:47.954845] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.517 [2024-05-14 02:58:47.954867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:02.517 [2024-05-14 02:58:47.954894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:02.517 [2024-05-14 02:58:47.954917] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.518 [2024-05-14 02:58:47.954942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:02.518 [2024-05-14 02:58:47.954968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:02.518 [2024-05-14 02:58:48.351998] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:12:02.518 [2024-05-14 02:58:48.354298] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.518 [2024-05-14 02:58:48.354373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:02.518 [2024-05-14 02:58:48.354411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:02.518 [2024-05-14 02:58:48.354462] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.518 [2024-05-14 02:58:48.354487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:02.518 [2024-05-14 02:58:48.354514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:02.518 [2024-05-14 02:58:48.354552] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.518 [2024-05-14 02:58:48.354576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:02.518 [2024-05-14 02:58:48.354629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:02.518 [2024-05-14 02:58:48.354657] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.518 [2024-05-14 02:58:48.354680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:02.518 [2024-05-14 02:58:48.354704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:02.518 [2024-05-14 02:58:48.354731] bdev_nvme.c:5208:aer_cb: *WARNING*: AER request execute failed 00:12:02.518 [2024-05-14 02:58:48.354758] bdev_nvme.c:5208:aer_cb: *WARNING*: AER request execute failed 00:12:02.518 [2024-05-14 02:58:48.354777] bdev_nvme.c:5208:aer_cb: *WARNING*: AER request execute failed 00:12:02.518 [2024-05-14 02:58:48.354798] bdev_nvme.c:5208:aer_cb: *WARNING*: AER request execute failed 00:12:09.089 02:58:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # jq length 00:12:09.089 02:58:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # rpc_cmd bdev_get_bdevs 00:12:09.089 02:58:53 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.089 02:58:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:09.089 02:58:53 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.089 02:58:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # (( 4 == 0 )) 00:12:09.089 02:58:53 sw_hotplug -- nvme/sw_hotplug.sh@41 -- # return 1 00:12:09.089 02:58:53 sw_hotplug -- common/autotest_common.sh@714 -- # trap - ERR 00:12:09.089 02:58:53 sw_hotplug -- common/autotest_common.sh@714 -- # print_backtrace 00:12:09.089 02:58:53 sw_hotplug -- common/autotest_common.sh@1149 -- # [[ hxBET =~ e ]] 00:12:09.089 02:58:53 sw_hotplug -- common/autotest_common.sh@1149 -- # return 0 00:12:09.089 02:58:53 sw_hotplug -- common/autotest_common.sh@714 -- # time=12.11 00:12:09.089 02:58:53 sw_hotplug -- common/autotest_common.sh@714 -- # trap - ERR 00:12:09.089 02:58:53 sw_hotplug -- common/autotest_common.sh@714 -- # print_backtrace 00:12:09.089 02:58:53 sw_hotplug -- 
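(Illustrative sketch of the target-side flow exercised here, default RPC socket assumed: controllers are attached as bdevs, hotplug monitoring is enabled once, and after each surprise removal the remaining bdev count is read back.)
scripts/rpc.py bdev_nvme_attach_controller -b Nvme00 -t PCIe -a 0000:00:10.0
scripts/rpc.py bdev_nvme_set_hotplug -e          # enable periodic hotplug monitoring
scripts/rpc.py bdev_get_bdevs | jq length        # bdev count after a removal event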
common/autotest_common.sh@1149 -- # [[ hxBET =~ e ]] 00:12:09.089 02:58:53 sw_hotplug -- common/autotest_common.sh@1149 -- # return 0 00:12:09.089 02:58:53 sw_hotplug -- common/autotest_common.sh@716 -- # echo 12.11 00:12:09.089 02:58:53 sw_hotplug -- nvme/sw_hotplug.sh@16 -- # helper_time=12.11 00:12:09.089 02:58:53 sw_hotplug -- nvme/sw_hotplug.sh@17 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 12.11 2 00:12:09.089 remove_attach_helper took 12.11s to complete (handling 2 nvme drive(s)) 02:58:53 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:12:09.089 02:58:53 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.089 02:58:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:09.089 02:58:53 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.089 02:58:53 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:12:09.089 02:58:53 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.089 02:58:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:09.089 02:58:53 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.089 02:58:53 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # debug_remove_attach_helper 3 6 true 00:12:09.089 02:58:53 sw_hotplug -- nvme/sw_hotplug.sh@14 -- # local helper_time=0 00:12:09.089 02:58:53 sw_hotplug -- nvme/sw_hotplug.sh@16 -- # timing_cmd remove_attach_helper 3 6 true 00:12:09.089 02:58:53 sw_hotplug -- common/autotest_common.sh@706 -- # [[ -t 0 ]] 00:12:09.089 02:58:53 sw_hotplug -- common/autotest_common.sh@706 -- # exec 00:12:09.089 02:58:53 sw_hotplug -- common/autotest_common.sh@708 -- # local time=0 TIMEFORMAT=%2R 00:12:09.089 02:58:53 sw_hotplug -- common/autotest_common.sh@714 -- # remove_attach_helper 3 6 true 00:12:09.089 02:58:53 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # local hotplug_events=3 00:12:09.089 02:58:53 sw_hotplug -- nvme/sw_hotplug.sh@23 -- # local hotplug_wait=6 00:12:09.089 02:58:53 sw_hotplug -- nvme/sw_hotplug.sh@24 -- # local use_bdev=true 00:12:09.089 02:58:53 sw_hotplug -- nvme/sw_hotplug.sh@25 -- # local dev bdfs 00:12:09.089 02:58:53 sw_hotplug -- nvme/sw_hotplug.sh@31 -- # sleep 6 00:12:14.450 02:59:00 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:12:14.450 02:59:00 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:12:14.450 02:59:00 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:12:14.450 02:59:00 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # trap - ERR 00:12:14.450 02:59:00 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # print_backtrace 00:12:14.450 02:59:00 sw_hotplug -- common/autotest_common.sh@1149 -- # [[ hxBET =~ e ]] 00:12:14.450 02:59:00 sw_hotplug -- common/autotest_common.sh@1149 -- # return 0 00:12:14.450 02:59:00 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:12:14.450 02:59:00 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:12:14.450 02:59:00 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # true 00:12:14.450 02:59:00 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # sleep 6 00:12:21.016 02:59:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # rpc_cmd bdev_get_bdevs 00:12:21.016 02:59:06 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.016 02:59:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:21.016 02:59:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # jq length 00:12:21.016 02:59:06 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.016 02:59:06 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # (( 4 == 0 )) 00:12:21.016 02:59:06 sw_hotplug -- nvme/sw_hotplug.sh@41 -- # return 1 00:12:21.016 02:59:06 sw_hotplug -- common/autotest_common.sh@714 -- # time=12.06 00:12:21.016 02:59:06 sw_hotplug -- common/autotest_common.sh@714 -- # trap - ERR 00:12:21.016 02:59:06 sw_hotplug -- common/autotest_common.sh@714 -- # print_backtrace 00:12:21.016 02:59:06 sw_hotplug -- common/autotest_common.sh@1149 -- # [[ hxBET =~ e ]] 00:12:21.016 02:59:06 sw_hotplug -- common/autotest_common.sh@1149 -- # return 0 00:12:21.016 02:59:06 sw_hotplug -- common/autotest_common.sh@716 -- # echo 12.06 00:12:21.016 02:59:06 sw_hotplug -- nvme/sw_hotplug.sh@16 -- # helper_time=12.06 00:12:21.016 02:59:06 sw_hotplug -- nvme/sw_hotplug.sh@17 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 12.06 2 00:12:21.016 remove_attach_helper took 12.06s to complete (handling 2 nvme drive(s)) 02:59:06 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:12:21.016 02:59:06 sw_hotplug -- nvme/sw_hotplug.sh@118 -- # killprocess 85108 00:12:21.016 02:59:06 sw_hotplug -- common/autotest_common.sh@946 -- # '[' -z 85108 ']' 00:12:21.016 02:59:06 sw_hotplug -- common/autotest_common.sh@950 -- # kill -0 85108 00:12:21.016 02:59:06 sw_hotplug -- common/autotest_common.sh@951 -- # uname 00:12:21.016 02:59:06 sw_hotplug -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:21.016 02:59:06 sw_hotplug -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 85108 00:12:21.016 02:59:06 sw_hotplug -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:21.016 killing process with pid 85108 00:12:21.016 02:59:06 sw_hotplug -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:21.016 02:59:06 sw_hotplug -- common/autotest_common.sh@964 -- # echo 'killing process with pid 85108' 00:12:21.016 02:59:06 sw_hotplug -- common/autotest_common.sh@965 -- # kill 85108 00:12:21.016 02:59:06 sw_hotplug -- common/autotest_common.sh@970 -- # wait 85108 00:12:21.016 00:12:21.016 real 1m15.538s 00:12:21.016 user 0m43.638s 00:12:21.016 sys 0m14.824s 00:12:21.016 02:59:06 sw_hotplug -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:21.016 ************************************ 00:12:21.016 02:59:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:21.016 END TEST sw_hotplug 00:12:21.016 ************************************ 00:12:21.016 02:59:06 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:21.016 02:59:06 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:21.016 02:59:06 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:21.016 02:59:06 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:21.016 02:59:06 -- common/autotest_common.sh@10 -- # set +x 00:12:21.016 ************************************ 00:12:21.016 START TEST nvme_xnvme 00:12:21.016 ************************************ 00:12:21.016 02:59:06 nvme_xnvme -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:21.016 * Looking for test storage... 
00:12:21.016 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:21.016 02:59:06 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:21.016 02:59:06 nvme_xnvme -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:21.016 02:59:06 nvme_xnvme -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:21.016 02:59:06 nvme_xnvme -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:21.016 02:59:06 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.016 02:59:06 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.016 02:59:06 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.016 02:59:06 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:21.016 02:59:06 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.016 02:59:06 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:12:21.016 02:59:06 nvme_xnvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:21.016 02:59:06 nvme_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:21.016 02:59:06 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:21.016 ************************************ 00:12:21.016 START TEST xnvme_to_malloc_dd_copy 00:12:21.016 ************************************ 00:12:21.016 02:59:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1121 -- # malloc_to_xnvme_copy 00:12:21.016 02:59:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:12:21.016 02:59:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:12:21.016 02:59:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:12:21.016 02:59:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # return 00:12:21.016 02:59:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 
00:12:21.016 02:59:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:12:21.016 02:59:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:21.016 02:59:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:12:21.016 02:59:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:12:21.016 02:59:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:12:21.016 02:59:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:12:21.016 02:59:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:12:21.016 02:59:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:12:21.016 02:59:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:12:21.016 02:59:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:12:21.016 02:59:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:12:21.016 02:59:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:21.016 02:59:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:21.016 02:59:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:21.016 02:59:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:21.016 02:59:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:21.016 02:59:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:21.016 02:59:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:21.016 02:59:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:21.016 { 00:12:21.016 "subsystems": [ 00:12:21.016 { 00:12:21.016 "subsystem": "bdev", 00:12:21.016 "config": [ 00:12:21.016 { 00:12:21.016 "params": { 00:12:21.016 "block_size": 512, 00:12:21.016 "num_blocks": 2097152, 00:12:21.016 "name": "malloc0" 00:12:21.016 }, 00:12:21.016 "method": "bdev_malloc_create" 00:12:21.016 }, 00:12:21.016 { 00:12:21.016 "params": { 00:12:21.016 "io_mechanism": "libaio", 00:12:21.016 "filename": "/dev/nullb0", 00:12:21.016 "name": "null0" 00:12:21.016 }, 00:12:21.016 "method": "bdev_xnvme_create" 00:12:21.016 }, 00:12:21.016 { 00:12:21.016 "method": "bdev_wait_for_examine" 00:12:21.016 } 00:12:21.016 ] 00:12:21.016 } 00:12:21.016 ] 00:12:21.016 } 00:12:21.016 [2024-05-14 02:59:06.670164] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:12:21.016 [2024-05-14 02:59:06.670350] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85454 ] 00:12:21.016 [2024-05-14 02:59:06.817898] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
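The gen_conf JSON above is handed to spdk_dd on file descriptor 62 by the harness; outside the harness the same malloc0 -> null0 copy can be reproduced by writing that config to a file. A minimal sketch, run from the SPDK repo root (the /tmp path is an assumption; the JSON and flags match the log):

# Standalone equivalent of the libaio copy pass above.
modprobe null_blk gb=1                      # init_null_blk, as in xnvme.sh@14
cat > /tmp/xnvme_copy.json <<'EOF'
{
  "subsystems": [
    { "subsystem": "bdev", "config": [
      { "params": { "block_size": 512, "num_blocks": 2097152, "name": "malloc0" },
        "method": "bdev_malloc_create" },
      { "params": { "io_mechanism": "libaio", "filename": "/dev/nullb0", "name": "null0" },
        "method": "bdev_xnvme_create" },
      { "method": "bdev_wait_for_examine" }
    ] }
  ]
}
EOF
./build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /tmp/xnvme_copy.json
modprobe -r null_blk                        # remove_null_blk cleanup, as at the end of the test

Swapping --ib and --ob gives the null0 -> malloc0 pass that follows, and changing "io_mechanism" to "io_uring" gives the second pair of runs.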
00:12:21.016 [2024-05-14 02:59:06.837314] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:21.016 [2024-05-14 02:59:06.871021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.278  Copying: 178/1024 [MB] (178 MBps) Copying: 366/1024 [MB] (187 MBps) Copying: 553/1024 [MB] (186 MBps) Copying: 742/1024 [MB] (189 MBps) Copying: 928/1024 [MB] (185 MBps) Copying: 1024/1024 [MB] (average 185 MBps) 00:12:27.278 00:12:27.278 02:59:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:27.278 02:59:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:27.278 02:59:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:27.278 02:59:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:27.278 { 00:12:27.278 "subsystems": [ 00:12:27.278 { 00:12:27.278 "subsystem": "bdev", 00:12:27.278 "config": [ 00:12:27.278 { 00:12:27.278 "params": { 00:12:27.278 "block_size": 512, 00:12:27.278 "num_blocks": 2097152, 00:12:27.278 "name": "malloc0" 00:12:27.278 }, 00:12:27.278 "method": "bdev_malloc_create" 00:12:27.278 }, 00:12:27.278 { 00:12:27.278 "params": { 00:12:27.278 "io_mechanism": "libaio", 00:12:27.278 "filename": "/dev/nullb0", 00:12:27.278 "name": "null0" 00:12:27.278 }, 00:12:27.278 "method": "bdev_xnvme_create" 00:12:27.278 }, 00:12:27.278 { 00:12:27.278 "method": "bdev_wait_for_examine" 00:12:27.278 } 00:12:27.278 ] 00:12:27.278 } 00:12:27.278 ] 00:12:27.278 } 00:12:27.278 [2024-05-14 02:59:13.138734] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:12:27.278 [2024-05-14 02:59:13.138932] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85537 ] 00:12:27.278 [2024-05-14 02:59:13.294874] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:12:27.537 [2024-05-14 02:59:13.314178] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:27.537 [2024-05-14 02:59:13.347579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.493  Copying: 188/1024 [MB] (188 MBps) Copying: 379/1024 [MB] (191 MBps) Copying: 568/1024 [MB] (189 MBps) Copying: 760/1024 [MB] (191 MBps) Copying: 949/1024 [MB] (188 MBps) Copying: 1024/1024 [MB] (average 189 MBps) 00:12:33.493 00:12:33.494 02:59:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:33.494 02:59:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:33.494 02:59:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:33.494 02:59:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:33.494 02:59:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:33.494 02:59:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:33.494 { 00:12:33.494 "subsystems": [ 00:12:33.494 { 00:12:33.494 "subsystem": "bdev", 00:12:33.494 "config": [ 00:12:33.494 { 00:12:33.494 "params": { 00:12:33.494 "block_size": 512, 00:12:33.494 "num_blocks": 2097152, 00:12:33.494 "name": "malloc0" 00:12:33.494 }, 00:12:33.494 "method": "bdev_malloc_create" 00:12:33.494 }, 00:12:33.494 { 00:12:33.494 "params": { 00:12:33.494 "io_mechanism": "io_uring", 00:12:33.494 "filename": "/dev/nullb0", 00:12:33.494 "name": "null0" 00:12:33.494 }, 00:12:33.494 "method": "bdev_xnvme_create" 00:12:33.494 }, 00:12:33.494 { 00:12:33.494 "method": "bdev_wait_for_examine" 00:12:33.494 } 00:12:33.494 ] 00:12:33.494 } 00:12:33.494 ] 00:12:33.494 } 00:12:33.494 [2024-05-14 02:59:19.486521] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:12:33.494 [2024-05-14 02:59:19.486699] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85611 ] 00:12:33.753 [2024-05-14 02:59:19.633751] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:12:33.753 [2024-05-14 02:59:19.653655] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:33.753 [2024-05-14 02:59:19.687211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.311  Copying: 202/1024 [MB] (202 MBps) Copying: 407/1024 [MB] (205 MBps) Copying: 610/1024 [MB] (202 MBps) Copying: 815/1024 [MB] (205 MBps) Copying: 1017/1024 [MB] (201 MBps) Copying: 1024/1024 [MB] (average 203 MBps) 00:12:39.311 00:12:39.311 02:59:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:39.311 02:59:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:39.311 02:59:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:39.311 02:59:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:39.570 { 00:12:39.570 "subsystems": [ 00:12:39.570 { 00:12:39.570 "subsystem": "bdev", 00:12:39.570 "config": [ 00:12:39.570 { 00:12:39.570 "params": { 00:12:39.570 "block_size": 512, 00:12:39.570 "num_blocks": 2097152, 00:12:39.570 "name": "malloc0" 00:12:39.570 }, 00:12:39.570 "method": "bdev_malloc_create" 00:12:39.570 }, 00:12:39.570 { 00:12:39.570 "params": { 00:12:39.570 "io_mechanism": "io_uring", 00:12:39.570 "filename": "/dev/nullb0", 00:12:39.570 "name": "null0" 00:12:39.570 }, 00:12:39.570 "method": "bdev_xnvme_create" 00:12:39.570 }, 00:12:39.570 { 00:12:39.570 "method": "bdev_wait_for_examine" 00:12:39.570 } 00:12:39.570 ] 00:12:39.570 } 00:12:39.570 ] 00:12:39.570 } 00:12:39.570 [2024-05-14 02:59:25.420888] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:12:39.570 [2024-05-14 02:59:25.421086] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85686 ] 00:12:39.570 [2024-05-14 02:59:25.568898] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:12:39.570 [2024-05-14 02:59:25.589658] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:39.829 [2024-05-14 02:59:25.624517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.782  Copying: 210/1024 [MB] (210 MBps) Copying: 403/1024 [MB] (192 MBps) Copying: 585/1024 [MB] (182 MBps) Copying: 769/1024 [MB] (183 MBps) Copying: 951/1024 [MB] (182 MBps) Copying: 1024/1024 [MB] (average 189 MBps) 00:12:45.782 00:12:45.782 02:59:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:12:45.782 02:59:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@195 -- # modprobe -r null_blk 00:12:45.782 00:12:45.782 real 0m25.144s 00:12:45.782 user 0m20.155s 00:12:45.782 sys 0m4.498s 00:12:45.782 02:59:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:45.782 02:59:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:45.782 ************************************ 00:12:45.782 END TEST xnvme_to_malloc_dd_copy 00:12:45.782 ************************************ 00:12:45.782 02:59:31 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:45.782 02:59:31 nvme_xnvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:45.782 02:59:31 nvme_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:45.782 02:59:31 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:45.782 ************************************ 00:12:45.782 START TEST xnvme_bdevperf 00:12:45.782 ************************************ 00:12:45.782 02:59:31 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1121 -- # xnvme_bdevperf 00:12:45.782 02:59:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:12:45.782 02:59:31 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:12:45.782 02:59:31 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:12:45.782 02:59:31 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # return 00:12:45.782 02:59:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:12:45.782 02:59:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:45.782 02:59:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:12:45.782 02:59:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:12:45.782 02:59:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:12:45.782 02:59:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:12:45.782 02:59:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:12:45.782 02:59:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:12:45.782 02:59:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:45.782 02:59:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:45.782 02:59:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:45.782 02:59:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:45.782 02:59:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:45.782 02:59:31 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@74 -- # gen_conf 00:12:45.782 02:59:31 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:45.782 02:59:31 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:46.041 { 00:12:46.041 "subsystems": [ 00:12:46.041 { 00:12:46.041 "subsystem": "bdev", 00:12:46.041 "config": [ 00:12:46.041 { 00:12:46.041 "params": { 00:12:46.041 "io_mechanism": "libaio", 00:12:46.041 "filename": "/dev/nullb0", 00:12:46.041 "name": "null0" 00:12:46.041 }, 00:12:46.041 "method": "bdev_xnvme_create" 00:12:46.041 }, 00:12:46.041 { 00:12:46.041 "method": "bdev_wait_for_examine" 00:12:46.042 } 00:12:46.042 ] 00:12:46.042 } 00:12:46.042 ] 00:12:46.042 } 00:12:46.042 [2024-05-14 02:59:31.903688] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:12:46.042 [2024-05-14 02:59:31.903922] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85782 ] 00:12:46.042 [2024-05-14 02:59:32.066005] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:46.301 [2024-05-14 02:59:32.084982] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:46.301 [2024-05-14 02:59:32.123917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.301 Running I/O for 5 seconds... 00:12:51.603 00:12:51.603 Latency(us) 00:12:51.603 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:51.603 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:51.603 null0 : 5.00 108158.37 422.49 0.00 0.00 587.91 154.53 975.59 00:12:51.603 =================================================================================================================== 00:12:51.603 Total : 108158.37 422.49 0.00 0.00 587.91 154.53 975.59 00:12:51.603 02:59:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:51.603 02:59:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:51.603 02:59:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:51.603 02:59:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:51.603 02:59:37 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:51.603 02:59:37 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:51.603 { 00:12:51.603 "subsystems": [ 00:12:51.603 { 00:12:51.603 "subsystem": "bdev", 00:12:51.603 "config": [ 00:12:51.603 { 00:12:51.603 "params": { 00:12:51.603 "io_mechanism": "io_uring", 00:12:51.603 "filename": "/dev/nullb0", 00:12:51.603 "name": "null0" 00:12:51.603 }, 00:12:51.603 "method": "bdev_xnvme_create" 00:12:51.603 }, 00:12:51.603 { 00:12:51.603 "method": "bdev_wait_for_examine" 00:12:51.603 } 00:12:51.603 ] 00:12:51.603 } 00:12:51.603 ] 00:12:51.603 } 00:12:51.603 [2024-05-14 02:59:37.543433] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 
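Both bdevperf passes in xnvme_bdevperf reuse the null_blk device and differ only in the io_mechanism of the single xnvme bdev. A standalone sketch of the libaio run, with the config file path as an assumption and every flag taken from the invocation above:

# Reproduce the null0 randread measurement outside the harness.
cat > /tmp/xnvme_bdevperf.json <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [
    { "params": { "io_mechanism": "libaio", "filename": "/dev/nullb0", "name": "null0" },
      "method": "bdev_xnvme_create" },
    { "method": "bdev_wait_for_examine" } ] } ] }
EOF
./build/examples/bdevperf --json /tmp/xnvme_bdevperf.json \
    -q 64 -w randread -t 5 -T null0 -o 4096   # queue depth 64, 4 KiB random reads, 5 s, target bdev null0
# Setting "io_mechanism" to "io_uring" yields the second run (~140k vs ~108k IOPS in this log).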
00:12:51.603 [2024-05-14 02:59:37.543636] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85851 ] 00:12:51.862 [2024-05-14 02:59:37.692917] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:51.862 [2024-05-14 02:59:37.710498] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.862 [2024-05-14 02:59:37.747913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.862 Running I/O for 5 seconds... 00:12:57.136 00:12:57.136 Latency(us) 00:12:57.136 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:57.136 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:57.136 null0 : 5.00 140264.13 547.91 0.00 0.00 452.57 228.07 726.11 00:12:57.136 =================================================================================================================== 00:12:57.136 Total : 140264.13 547.91 0.00 0.00 452.57 228.07 726.11 00:12:57.136 02:59:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:12:57.136 02:59:43 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@195 -- # modprobe -r null_blk 00:12:57.136 00:12:57.136 real 0m11.296s 00:12:57.136 user 0m8.318s 00:12:57.136 sys 0m2.747s 00:12:57.136 02:59:43 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:57.136 02:59:43 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:57.136 ************************************ 00:12:57.136 END TEST xnvme_bdevperf 00:12:57.136 ************************************ 00:12:57.136 00:12:57.136 real 0m36.637s 00:12:57.136 user 0m28.543s 00:12:57.136 sys 0m7.363s 00:12:57.136 02:59:43 nvme_xnvme -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:57.136 02:59:43 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:57.136 ************************************ 00:12:57.136 END TEST nvme_xnvme 00:12:57.136 ************************************ 00:12:57.137 02:59:43 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:57.137 02:59:43 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:57.137 02:59:43 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:57.137 02:59:43 -- common/autotest_common.sh@10 -- # set +x 00:12:57.137 ************************************ 00:12:57.137 START TEST blockdev_xnvme 00:12:57.137 ************************************ 00:12:57.137 02:59:43 blockdev_xnvme -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:57.395 * Looking for test storage... 
00:12:57.395 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:57.395 02:59:43 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:57.395 02:59:43 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:12:57.395 02:59:43 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:57.395 02:59:43 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:57.395 02:59:43 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:57.395 02:59:43 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:57.395 02:59:43 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:12:57.395 02:59:43 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:12:57.395 02:59:43 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:12:57.395 02:59:43 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:12:57.395 02:59:43 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:12:57.395 02:59:43 blockdev_xnvme -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:12:57.395 02:59:43 blockdev_xnvme -- bdev/blockdev.sh@674 -- # uname -s 00:12:57.395 02:59:43 blockdev_xnvme -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:12:57.395 02:59:43 blockdev_xnvme -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:12:57.395 02:59:43 blockdev_xnvme -- bdev/blockdev.sh@682 -- # test_type=xnvme 00:12:57.395 02:59:43 blockdev_xnvme -- bdev/blockdev.sh@683 -- # crypto_device= 00:12:57.395 02:59:43 blockdev_xnvme -- bdev/blockdev.sh@684 -- # dek= 00:12:57.395 02:59:43 blockdev_xnvme -- bdev/blockdev.sh@685 -- # env_ctx= 00:12:57.395 02:59:43 blockdev_xnvme -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:12:57.395 02:59:43 blockdev_xnvme -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:12:57.395 02:59:43 blockdev_xnvme -- bdev/blockdev.sh@690 -- # [[ xnvme == bdev ]] 00:12:57.395 02:59:43 blockdev_xnvme -- bdev/blockdev.sh@690 -- # [[ xnvme == crypto_* ]] 00:12:57.395 02:59:43 blockdev_xnvme -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:12:57.395 02:59:43 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=85974 00:12:57.395 02:59:43 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:57.395 02:59:43 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 85974 00:12:57.395 02:59:43 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:12:57.395 02:59:43 blockdev_xnvme -- common/autotest_common.sh@827 -- # '[' -z 85974 ']' 00:12:57.395 02:59:43 blockdev_xnvme -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:57.395 02:59:43 blockdev_xnvme -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:57.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:57.395 02:59:43 blockdev_xnvme -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:57.395 02:59:43 blockdev_xnvme -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:57.395 02:59:43 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:57.395 [2024-05-14 02:59:43.366221] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 
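setup_xnvme_conf, traced below, walks /dev/nvme*n*, skips zoned namespaces, and queues one bdev_xnvme_create call per node with io_uring as the I/O mechanism; the expanded calls appear verbatim in the printf further down. A minimal sketch of the same registration against an already-running spdk_tgt, using scripts/rpc.py where the harness uses its rpc_cmd wrapper (an assumption), and without the zoned-device filtering:

# Register each NVMe namespace node as an xnvme bdev.
# Argument order matches the calls printed below: filename, bdev name, io_mechanism.
io_mechanism=io_uring
for nvme in /dev/nvme*n*; do
  [[ -b $nvme ]] || continue
  ./scripts/rpc.py bdev_xnvme_create "$nvme" "${nvme##*/}" "$io_mechanism"
done
./scripts/rpc.py bdev_get_bdevs | jq -r '.[].name'   # should list nvme0n1, nvme0n2, ... as in the output below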
00:12:57.395 [2024-05-14 02:59:43.366396] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85974 ] 00:12:57.654 [2024-05-14 02:59:43.514986] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:57.654 [2024-05-14 02:59:43.538346] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:57.654 [2024-05-14 02:59:43.583371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.587 02:59:44 blockdev_xnvme -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:58.587 02:59:44 blockdev_xnvme -- common/autotest_common.sh@860 -- # return 0 00:12:58.587 02:59:44 blockdev_xnvme -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:12:58.587 02:59:44 blockdev_xnvme -- bdev/blockdev.sh@729 -- # setup_xnvme_conf 00:12:58.587 02:59:44 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:12:58.587 02:59:44 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:12:58.587 02:59:44 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:58.587 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:58.844 Waiting for block devices as requested 00:12:58.844 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:58.844 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:04.207 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:04.207 02:59:49 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:13:04.207 02:59:49 blockdev_xnvme -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:13:04.207 02:59:49 blockdev_xnvme -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:13:04.207 02:59:49 blockdev_xnvme -- common/autotest_common.sh@1666 -- # local nvme bdf 00:13:04.207 02:59:49 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:13:04.207 02:59:49 blockdev_xnvme -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:13:04.207 02:59:49 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:13:04.207 02:59:49 blockdev_xnvme -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:13:04.207 02:59:49 blockdev_xnvme -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:13:04.207 02:59:49 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:13:04.207 02:59:49 blockdev_xnvme -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n2 00:13:04.207 02:59:49 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local device=nvme0n2 00:13:04.207 02:59:49 blockdev_xnvme -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:13:04.207 02:59:49 blockdev_xnvme -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:13:04.207 02:59:49 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:13:04.207 02:59:49 blockdev_xnvme -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n3 00:13:04.207 02:59:49 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local device=nvme0n3 00:13:04.207 02:59:49 blockdev_xnvme -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:13:04.207 02:59:49 
blockdev_xnvme -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:13:04.207 02:59:49 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:13:04.207 02:59:49 blockdev_xnvme -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1c1n1 00:13:04.207 02:59:49 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local device=nvme1c1n1 00:13:04.207 02:59:49 blockdev_xnvme -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:13:04.207 02:59:49 blockdev_xnvme -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:13:04.207 02:59:49 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:13:04.207 02:59:49 blockdev_xnvme -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:13:04.207 02:59:49 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:13:04.207 02:59:49 blockdev_xnvme -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:13:04.207 02:59:49 blockdev_xnvme -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:13:04.207 02:59:49 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:13:04.207 02:59:49 blockdev_xnvme -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n1 00:13:04.207 02:59:49 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local device=nvme2n1 00:13:04.207 02:59:49 blockdev_xnvme -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:13:04.207 02:59:49 blockdev_xnvme -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:13:04.207 02:59:49 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:13:04.207 02:59:49 blockdev_xnvme -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3n1 00:13:04.207 02:59:49 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local device=nvme3n1 00:13:04.207 02:59:49 blockdev_xnvme -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:13:04.207 02:59:49 blockdev_xnvme -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:13:04.207 02:59:49 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:04.207 02:59:49 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:13:04.207 02:59:49 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:04.207 02:59:49 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:04.207 02:59:49 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:04.207 02:59:49 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:13:04.207 02:59:49 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:04.207 02:59:49 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:04.207 02:59:49 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:04.207 02:59:49 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:13:04.207 02:59:49 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:04.207 02:59:49 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:04.207 02:59:49 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:04.207 02:59:49 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:13:04.207 02:59:49 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:04.207 02:59:49 
blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:04.207 02:59:49 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:04.207 02:59:49 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:13:04.207 02:59:49 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:04.207 02:59:49 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:04.207 02:59:49 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:04.207 02:59:49 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:13:04.207 02:59:49 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:04.207 02:59:49 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:04.207 02:59:49 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:13:04.207 02:59:49 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:13:04.207 02:59:49 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.207 02:59:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:04.207 02:59:49 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:13:04.207 nvme0n1 00:13:04.207 nvme0n2 00:13:04.207 nvme0n3 00:13:04.207 nvme1n1 00:13:04.207 nvme2n1 00:13:04.207 nvme3n1 00:13:04.207 02:59:50 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.207 02:59:50 blockdev_xnvme -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:13:04.207 02:59:50 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.207 02:59:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:04.207 02:59:50 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.207 02:59:50 blockdev_xnvme -- bdev/blockdev.sh@740 -- # cat 00:13:04.207 02:59:50 blockdev_xnvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:13:04.207 02:59:50 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.208 02:59:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:04.208 02:59:50 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.208 02:59:50 blockdev_xnvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:13:04.208 02:59:50 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.208 02:59:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:04.208 02:59:50 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.208 02:59:50 blockdev_xnvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:13:04.208 02:59:50 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.208 02:59:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:04.208 02:59:50 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.208 02:59:50 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:13:04.208 02:59:50 blockdev_xnvme -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:13:04.208 02:59:50 blockdev_xnvme -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:13:04.208 02:59:50 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:13:04.208 02:59:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:04.208 02:59:50 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.208 02:59:50 blockdev_xnvme -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:13:04.208 02:59:50 blockdev_xnvme -- bdev/blockdev.sh@749 -- # jq -r .name 00:13:04.208 02:59:50 blockdev_xnvme -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "ad21e06e-6a9b-497a-96bc-0995d9955ce8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ad21e06e-6a9b-497a-96bc-0995d9955ce8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "6ed12313-87d6-4cef-8df6-35f07619a129"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6ed12313-87d6-4cef-8df6-35f07619a129",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "970e43c4-0d6f-4584-9a87-61f6faa9b520"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "970e43c4-0d6f-4584-9a87-61f6faa9b520",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "b9279891-39a4-43c0-9179-a20f11a3d043"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "b9279891-39a4-43c0-9179-a20f11a3d043",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "8c6c8998-b8e8-4f82-9ae3-34ce29bdcfed"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "8c6c8998-b8e8-4f82-9ae3-34ce29bdcfed",' ' 
"assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "9b6502a7-e935-4661-8c94-fefb194caff7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "9b6502a7-e935-4661-8c94-fefb194caff7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:13:04.208 02:59:50 blockdev_xnvme -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:13:04.208 02:59:50 blockdev_xnvme -- bdev/blockdev.sh@752 -- # hello_world_bdev=nvme0n1 00:13:04.208 02:59:50 blockdev_xnvme -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:13:04.208 02:59:50 blockdev_xnvme -- bdev/blockdev.sh@754 -- # killprocess 85974 00:13:04.208 02:59:50 blockdev_xnvme -- common/autotest_common.sh@946 -- # '[' -z 85974 ']' 00:13:04.208 02:59:50 blockdev_xnvme -- common/autotest_common.sh@950 -- # kill -0 85974 00:13:04.208 02:59:50 blockdev_xnvme -- common/autotest_common.sh@951 -- # uname 00:13:04.208 02:59:50 blockdev_xnvme -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:04.208 02:59:50 blockdev_xnvme -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 85974 00:13:04.466 02:59:50 blockdev_xnvme -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:04.466 02:59:50 blockdev_xnvme -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:04.466 killing process with pid 85974 00:13:04.466 02:59:50 blockdev_xnvme -- common/autotest_common.sh@964 -- # echo 'killing process with pid 85974' 00:13:04.466 02:59:50 blockdev_xnvme -- common/autotest_common.sh@965 -- # kill 85974 00:13:04.466 02:59:50 blockdev_xnvme -- common/autotest_common.sh@970 -- # wait 85974 00:13:04.725 02:59:50 blockdev_xnvme -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:04.725 02:59:50 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:04.725 02:59:50 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:13:04.725 02:59:50 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:04.725 02:59:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:04.725 ************************************ 00:13:04.725 START TEST bdev_hello_world 00:13:04.725 ************************************ 00:13:04.725 02:59:50 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:04.725 [2024-05-14 02:59:50.632471] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 
24.07.0-rc0 initialization... 00:13:04.725 [2024-05-14 02:59:50.632619] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86239 ] 00:13:04.983 [2024-05-14 02:59:50.770249] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:04.983 [2024-05-14 02:59:50.790563] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:04.983 [2024-05-14 02:59:50.826092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.983 [2024-05-14 02:59:50.987940] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:13:04.983 [2024-05-14 02:59:50.987997] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:13:04.983 [2024-05-14 02:59:50.988044] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:13:04.983 [2024-05-14 02:59:50.990187] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:13:04.983 [2024-05-14 02:59:50.990602] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:13:04.983 [2024-05-14 02:59:50.990637] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:13:04.983 [2024-05-14 02:59:50.990872] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:13:04.983 00:13:04.983 [2024-05-14 02:59:50.990909] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:13:05.242 00:13:05.242 real 0m0.610s 00:13:05.242 user 0m0.338s 00:13:05.242 sys 0m0.162s 00:13:05.242 02:59:51 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:05.242 02:59:51 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:13:05.242 ************************************ 00:13:05.242 END TEST bdev_hello_world 00:13:05.242 ************************************ 00:13:05.242 02:59:51 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:13:05.242 02:59:51 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:05.242 02:59:51 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:05.242 02:59:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:05.242 ************************************ 00:13:05.242 START TEST bdev_bounds 00:13:05.242 ************************************ 00:13:05.242 02:59:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1121 -- # bdev_bounds '' 00:13:05.242 02:59:51 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=86269 00:13:05.242 02:59:51 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:13:05.242 02:59:51 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:05.242 02:59:51 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 86269' 00:13:05.242 Process bdevio pid: 86269 00:13:05.242 02:59:51 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 86269 00:13:05.242 02:59:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@827 -- # '[' -z 86269 ']' 00:13:05.242 02:59:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 
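bdev_bounds drives the same bdev.json through the bdevio app: bdevio is started with -w so it idles waiting for an RPC trigger, and tests.py perform_tests then launches the per-bdev I/O boundary suites whose pass/fail lines follow. A rough standalone equivalent, run from the repo root; the backgrounding and readiness wait are assumptions about sequencing, not taken from the log:

# Run the bdevio boundary suites against the generated bdev.json.
./test/bdev/bdevio/bdevio -w -s 0 --json ./test/bdev/bdev.json &   # -w: wait for the perform_tests RPC
bdevio_pid=$!
sleep 1                                                            # crude readiness wait; the harness polls the RPC socket instead
./test/bdev/bdevio/tests.py perform_tests                          # kicks off the suites printed below
kill "$bdevio_pid"                                                 # the harness cleans up with its killprocess helper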
00:13:05.242 02:59:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:05.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:05.242 02:59:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:05.242 02:59:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:05.242 02:59:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:05.500 [2024-05-14 02:59:51.304710] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:13:05.500 [2024-05-14 02:59:51.304922] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86269 ] 00:13:05.500 [2024-05-14 02:59:51.453964] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:05.500 [2024-05-14 02:59:51.471892] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:05.500 [2024-05-14 02:59:51.509941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:05.500 [2024-05-14 02:59:51.510027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.500 [2024-05-14 02:59:51.510096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:06.436 02:59:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:06.436 02:59:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@860 -- # return 0 00:13:06.436 02:59:52 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:13:06.436 I/O targets: 00:13:06.436 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:06.436 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:06.436 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:06.436 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 00:13:06.436 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:13:06.436 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:13:06.436 00:13:06.436 00:13:06.436 CUnit - A unit testing framework for C - Version 2.1-3 00:13:06.436 http://cunit.sourceforge.net/ 00:13:06.436 00:13:06.436 00:13:06.436 Suite: bdevio tests on: nvme3n1 00:13:06.436 Test: blockdev write read block ...passed 00:13:06.436 Test: blockdev write zeroes read block ...passed 00:13:06.436 Test: blockdev write zeroes read no split ...passed 00:13:06.436 Test: blockdev write zeroes read split ...passed 00:13:06.436 Test: blockdev write zeroes read split partial ...passed 00:13:06.436 Test: blockdev reset ...passed 00:13:06.436 Test: blockdev write read 8 blocks ...passed 00:13:06.436 Test: blockdev write read size > 128k ...passed 00:13:06.436 Test: blockdev write read invalid size ...passed 00:13:06.436 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:06.436 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:06.436 Test: blockdev write read max offset ...passed 00:13:06.436 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:06.436 Test: blockdev writev readv 8 blocks ...passed 00:13:06.436 Test: blockdev writev readv 30 x 1block ...passed 00:13:06.436 Test: 
blockdev writev readv block ...passed 00:13:06.436 Test: blockdev writev readv size > 128k ...passed 00:13:06.436 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:06.436 Test: blockdev comparev and writev ...passed 00:13:06.436 Test: blockdev nvme passthru rw ...passed 00:13:06.436 Test: blockdev nvme passthru vendor specific ...passed 00:13:06.436 Test: blockdev nvme admin passthru ...passed 00:13:06.436 Test: blockdev copy ...passed 00:13:06.436 Suite: bdevio tests on: nvme2n1 00:13:06.436 Test: blockdev write read block ...passed 00:13:06.436 Test: blockdev write zeroes read block ...passed 00:13:06.436 Test: blockdev write zeroes read no split ...passed 00:13:06.436 Test: blockdev write zeroes read split ...passed 00:13:06.436 Test: blockdev write zeroes read split partial ...passed 00:13:06.436 Test: blockdev reset ...passed 00:13:06.436 Test: blockdev write read 8 blocks ...passed 00:13:06.436 Test: blockdev write read size > 128k ...passed 00:13:06.436 Test: blockdev write read invalid size ...passed 00:13:06.436 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:06.436 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:06.436 Test: blockdev write read max offset ...passed 00:13:06.436 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:06.436 Test: blockdev writev readv 8 blocks ...passed 00:13:06.436 Test: blockdev writev readv 30 x 1block ...passed 00:13:06.436 Test: blockdev writev readv block ...passed 00:13:06.436 Test: blockdev writev readv size > 128k ...passed 00:13:06.436 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:06.436 Test: blockdev comparev and writev ...passed 00:13:06.436 Test: blockdev nvme passthru rw ...passed 00:13:06.436 Test: blockdev nvme passthru vendor specific ...passed 00:13:06.436 Test: blockdev nvme admin passthru ...passed 00:13:06.436 Test: blockdev copy ...passed 00:13:06.436 Suite: bdevio tests on: nvme1n1 00:13:06.436 Test: blockdev write read block ...passed 00:13:06.436 Test: blockdev write zeroes read block ...passed 00:13:06.436 Test: blockdev write zeroes read no split ...passed 00:13:06.436 Test: blockdev write zeroes read split ...passed 00:13:06.436 Test: blockdev write zeroes read split partial ...passed 00:13:06.436 Test: blockdev reset ...passed 00:13:06.436 Test: blockdev write read 8 blocks ...passed 00:13:06.436 Test: blockdev write read size > 128k ...passed 00:13:06.436 Test: blockdev write read invalid size ...passed 00:13:06.436 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:06.436 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:06.436 Test: blockdev write read max offset ...passed 00:13:06.436 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:06.436 Test: blockdev writev readv 8 blocks ...passed 00:13:06.436 Test: blockdev writev readv 30 x 1block ...passed 00:13:06.436 Test: blockdev writev readv block ...passed 00:13:06.436 Test: blockdev writev readv size > 128k ...passed 00:13:06.436 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:06.436 Test: blockdev comparev and writev ...passed 00:13:06.436 Test: blockdev nvme passthru rw ...passed 00:13:06.436 Test: blockdev nvme passthru vendor specific ...passed 00:13:06.436 Test: blockdev nvme admin passthru ...passed 00:13:06.436 Test: blockdev copy ...passed 00:13:06.436 Suite: bdevio tests on: nvme0n3 00:13:06.436 Test: blockdev write read block 
...passed 00:13:06.436 Test: blockdev write zeroes read block ...passed 00:13:06.436 Test: blockdev write zeroes read no split ...passed 00:13:06.436 Test: blockdev write zeroes read split ...passed 00:13:06.436 Test: blockdev write zeroes read split partial ...passed 00:13:06.436 Test: blockdev reset ...passed 00:13:06.436 Test: blockdev write read 8 blocks ...passed 00:13:06.436 Test: blockdev write read size > 128k ...passed 00:13:06.436 Test: blockdev write read invalid size ...passed 00:13:06.436 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:06.436 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:06.436 Test: blockdev write read max offset ...passed 00:13:06.436 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:06.436 Test: blockdev writev readv 8 blocks ...passed 00:13:06.436 Test: blockdev writev readv 30 x 1block ...passed 00:13:06.436 Test: blockdev writev readv block ...passed 00:13:06.436 Test: blockdev writev readv size > 128k ...passed 00:13:06.436 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:06.436 Test: blockdev comparev and writev ...passed 00:13:06.436 Test: blockdev nvme passthru rw ...passed 00:13:06.436 Test: blockdev nvme passthru vendor specific ...passed 00:13:06.436 Test: blockdev nvme admin passthru ...passed 00:13:06.436 Test: blockdev copy ...passed 00:13:06.436 Suite: bdevio tests on: nvme0n2 00:13:06.436 Test: blockdev write read block ...passed 00:13:06.436 Test: blockdev write zeroes read block ...passed 00:13:06.436 Test: blockdev write zeroes read no split ...passed 00:13:06.436 Test: blockdev write zeroes read split ...passed 00:13:06.436 Test: blockdev write zeroes read split partial ...passed 00:13:06.436 Test: blockdev reset ...passed 00:13:06.436 Test: blockdev write read 8 blocks ...passed 00:13:06.436 Test: blockdev write read size > 128k ...passed 00:13:06.436 Test: blockdev write read invalid size ...passed 00:13:06.436 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:06.436 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:06.436 Test: blockdev write read max offset ...passed 00:13:06.436 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:06.436 Test: blockdev writev readv 8 blocks ...passed 00:13:06.436 Test: blockdev writev readv 30 x 1block ...passed 00:13:06.436 Test: blockdev writev readv block ...passed 00:13:06.436 Test: blockdev writev readv size > 128k ...passed 00:13:06.436 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:06.436 Test: blockdev comparev and writev ...passed 00:13:06.436 Test: blockdev nvme passthru rw ...passed 00:13:06.436 Test: blockdev nvme passthru vendor specific ...passed 00:13:06.436 Test: blockdev nvme admin passthru ...passed 00:13:06.436 Test: blockdev copy ...passed 00:13:06.436 Suite: bdevio tests on: nvme0n1 00:13:06.436 Test: blockdev write read block ...passed 00:13:06.436 Test: blockdev write zeroes read block ...passed 00:13:06.436 Test: blockdev write zeroes read no split ...passed 00:13:06.436 Test: blockdev write zeroes read split ...passed 00:13:06.436 Test: blockdev write zeroes read split partial ...passed 00:13:06.436 Test: blockdev reset ...passed 00:13:06.436 Test: blockdev write read 8 blocks ...passed 00:13:06.436 Test: blockdev write read size > 128k ...passed 00:13:06.436 Test: blockdev write read invalid size ...passed 00:13:06.436 Test: blockdev write read offset + nbytes == 
size of blockdev ...passed 00:13:06.436 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:06.436 Test: blockdev write read max offset ...passed 00:13:06.436 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:06.436 Test: blockdev writev readv 8 blocks ...passed 00:13:06.436 Test: blockdev writev readv 30 x 1block ...passed 00:13:06.436 Test: blockdev writev readv block ...passed 00:13:06.436 Test: blockdev writev readv size > 128k ...passed 00:13:06.436 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:06.436 Test: blockdev comparev and writev ...passed 00:13:06.437 Test: blockdev nvme passthru rw ...passed 00:13:06.437 Test: blockdev nvme passthru vendor specific ...passed 00:13:06.437 Test: blockdev nvme admin passthru ...passed 00:13:06.437 Test: blockdev copy ...passed 00:13:06.437 00:13:06.437 Run Summary: Type Total Ran Passed Failed Inactive 00:13:06.437 suites 6 6 n/a 0 0 00:13:06.437 tests 138 138 138 0 0 00:13:06.437 asserts 780 780 780 0 n/a 00:13:06.437 00:13:06.437 Elapsed time = 0.288 seconds 00:13:06.437 0 00:13:06.437 02:59:52 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 86269 00:13:06.437 02:59:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@946 -- # '[' -z 86269 ']' 00:13:06.437 02:59:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@950 -- # kill -0 86269 00:13:06.437 02:59:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@951 -- # uname 00:13:06.695 02:59:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:06.695 02:59:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 86269 00:13:06.695 killing process with pid 86269 00:13:06.695 02:59:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:06.695 02:59:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:06.695 02:59:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # echo 'killing process with pid 86269' 00:13:06.695 02:59:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@965 -- # kill 86269 00:13:06.695 02:59:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@970 -- # wait 86269 00:13:06.695 02:59:52 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:13:06.695 00:13:06.695 real 0m1.453s 00:13:06.695 user 0m3.643s 00:13:06.695 sys 0m0.298s 00:13:06.695 02:59:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:06.695 02:59:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:06.695 ************************************ 00:13:06.695 END TEST bdev_bounds 00:13:06.695 ************************************ 00:13:06.695 02:59:52 blockdev_xnvme -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:13:06.695 02:59:52 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:13:06.695 02:59:52 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:06.695 02:59:52 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:06.695 ************************************ 00:13:06.695 START TEST bdev_nbd 00:13:06.695 ************************************ 00:13:06.695 02:59:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1121 -- # nbd_function_test 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:13:06.695 02:59:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:13:06.695 02:59:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:13:06.695 02:59:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:06.695 02:59:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:06.695 02:59:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:13:06.695 02:59:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:13:06.695 02:59:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=6 00:13:06.695 02:59:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:13:06.695 02:59:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:06.695 02:59:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:13:06.695 02:59:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=6 00:13:06.695 02:59:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:06.695 02:59:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:13:06.695 02:59:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:13:06.695 02:59:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:13:06.695 02:59:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=86321 00:13:06.954 02:59:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:13:06.954 02:59:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 86321 /var/tmp/spdk-nbd.sock 00:13:06.954 02:59:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:06.954 02:59:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@827 -- # '[' -z 86321 ']' 00:13:06.954 02:59:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:06.954 02:59:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:06.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:13:06.954 02:59:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:13:06.954 02:59:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:06.954 02:59:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:06.954 [2024-05-14 02:59:52.799393] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 
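[Editorial note, not part of the trace] The bdev_nbd phase above is driven by shell helpers, which makes the raw xtrace hard to follow. As an illustration only -- not the autotest code itself -- the flow reduces to: start bdev_svc against the pre-generated JSON bdev config, wait for its RPC socket, then attach each bdev to an NBD block device via rpc.py. A minimal sketch, assuming the repo paths shown in the trace and using a simple socket poll as a stand-in for the suite's waitforlisten helper:

    #!/usr/bin/env bash
    # Illustrative sketch only -- not the autotest helpers themselves.
    SPDK=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/spdk-nbd.sock
    CONF=$SPDK/test/bdev/bdev.json

    # Start the bdev service with the JSON bdev configuration.
    $SPDK/test/app/bdev_svc/bdev_svc -r "$SOCK" -i 0 --json "$CONF" &
    svc_pid=$!   # the suite later tears this down via killprocess

    # Poll until the RPC socket appears (stand-in for waitforlisten).
    for _ in $(seq 1 100); do [ -S "$SOCK" ] && break; sleep 0.1; done

    # Attach each bdev to an NBD device; the nbd0..nbd5 mapping here is
    # just for illustration (the test also lets the RPC pick the device).
    i=0
    for bdev in nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1; do
        $SPDK/scripts/rpc.py -s "$SOCK" nbd_start_disk "$bdev" "/dev/nbd$i"
        i=$((i + 1))
    done

After each nbd_start_disk, the trace's waitfornbd loop greps /proc/partitions until the device shows up and then does a single 4 KiB direct-I/O dd as a sanity read, which is what produces the "1+0 records in / 1+0 records out" lines that follow.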
00:13:06.954 [2024-05-14 02:59:52.799969] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:06.954 [2024-05-14 02:59:52.941331] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:06.954 [2024-05-14 02:59:52.962980] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:07.212 [2024-05-14 02:59:53.001942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.777 02:59:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:07.777 02:59:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@860 -- # return 0 00:13:07.777 02:59:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:13:07.777 02:59:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:07.777 02:59:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:13:07.777 02:59:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:13:07.777 02:59:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:13:07.777 02:59:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:07.777 02:59:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:13:07.777 02:59:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:13:07.777 02:59:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:13:07.777 02:59:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:13:07.777 02:59:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:13:07.777 02:59:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:07.777 02:59:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:13:08.035 02:59:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:13:08.035 02:59:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:13:08.035 02:59:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:13:08.035 02:59:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:13:08.035 02:59:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:13:08.035 02:59:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:08.035 02:59:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:08.035 02:59:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:13:08.035 02:59:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:13:08.035 02:59:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:08.035 02:59:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:08.035 02:59:54 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:08.035 1+0 records in 00:13:08.035 1+0 records out 00:13:08.035 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000496116 s, 8.3 MB/s 00:13:08.035 02:59:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:08.035 02:59:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:13:08.035 02:59:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:08.035 02:59:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:08.035 02:59:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:13:08.035 02:59:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:08.035 02:59:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:08.036 02:59:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:13:08.294 02:59:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:13:08.294 02:59:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:13:08.294 02:59:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:13:08.294 02:59:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:13:08.294 02:59:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:13:08.294 02:59:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:08.294 02:59:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:08.294 02:59:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:13:08.294 02:59:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:13:08.294 02:59:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:08.294 02:59:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:08.294 02:59:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:08.294 1+0 records in 00:13:08.294 1+0 records out 00:13:08.294 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000466998 s, 8.8 MB/s 00:13:08.294 02:59:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:08.294 02:59:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:13:08.294 02:59:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:08.294 02:59:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:08.294 02:59:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:13:08.294 02:59:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:08.294 02:59:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:08.294 02:59:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:13:08.552 02:59:54 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:13:08.552 02:59:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:13:08.552 02:59:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:13:08.552 02:59:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd2 00:13:08.552 02:59:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:13:08.552 02:59:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:08.552 02:59:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:08.552 02:59:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd2 /proc/partitions 00:13:08.811 02:59:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:13:08.811 02:59:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:08.811 02:59:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:08.811 02:59:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:08.811 1+0 records in 00:13:08.811 1+0 records out 00:13:08.811 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000615905 s, 6.7 MB/s 00:13:08.811 02:59:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:08.811 02:59:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:13:08.811 02:59:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:08.811 02:59:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:08.811 02:59:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:13:08.811 02:59:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:08.811 02:59:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:08.811 02:59:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:13:08.811 02:59:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:13:08.811 02:59:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:13:08.811 02:59:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:13:08.811 02:59:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd3 00:13:08.811 02:59:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:13:08.811 02:59:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:08.811 02:59:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:08.811 02:59:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd3 /proc/partitions 00:13:09.069 02:59:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:13:09.069 02:59:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:09.069 02:59:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:09.069 02:59:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:09.069 1+0 records in 00:13:09.069 1+0 
records out 00:13:09.069 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000684174 s, 6.0 MB/s 00:13:09.069 02:59:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:09.069 02:59:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:13:09.069 02:59:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:09.069 02:59:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:09.069 02:59:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:13:09.069 02:59:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:09.069 02:59:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:09.069 02:59:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:13:09.328 02:59:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:13:09.328 02:59:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:13:09.328 02:59:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:13:09.328 02:59:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd4 00:13:09.328 02:59:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:13:09.328 02:59:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:09.328 02:59:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:09.328 02:59:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd4 /proc/partitions 00:13:09.328 02:59:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:13:09.328 02:59:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:09.328 02:59:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:09.328 02:59:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:09.328 1+0 records in 00:13:09.328 1+0 records out 00:13:09.328 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000755408 s, 5.4 MB/s 00:13:09.328 02:59:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:09.328 02:59:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:13:09.328 02:59:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:09.328 02:59:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:09.328 02:59:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:13:09.328 02:59:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:09.328 02:59:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:09.328 02:59:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:13:09.587 02:59:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:13:09.587 02:59:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:13:09.587 02:59:55 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:13:09.587 02:59:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd5 00:13:09.587 02:59:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:13:09.587 02:59:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:09.587 02:59:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:09.587 02:59:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd5 /proc/partitions 00:13:09.587 02:59:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:13:09.587 02:59:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:09.587 02:59:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:09.587 02:59:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:09.587 1+0 records in 00:13:09.587 1+0 records out 00:13:09.587 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000766724 s, 5.3 MB/s 00:13:09.587 02:59:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:09.587 02:59:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:13:09.587 02:59:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:09.587 02:59:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:09.587 02:59:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:13:09.587 02:59:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:09.587 02:59:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:09.587 02:59:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:09.845 02:59:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:13:09.846 { 00:13:09.846 "nbd_device": "/dev/nbd0", 00:13:09.846 "bdev_name": "nvme0n1" 00:13:09.846 }, 00:13:09.846 { 00:13:09.846 "nbd_device": "/dev/nbd1", 00:13:09.846 "bdev_name": "nvme0n2" 00:13:09.846 }, 00:13:09.846 { 00:13:09.846 "nbd_device": "/dev/nbd2", 00:13:09.846 "bdev_name": "nvme0n3" 00:13:09.846 }, 00:13:09.846 { 00:13:09.846 "nbd_device": "/dev/nbd3", 00:13:09.846 "bdev_name": "nvme1n1" 00:13:09.846 }, 00:13:09.846 { 00:13:09.846 "nbd_device": "/dev/nbd4", 00:13:09.846 "bdev_name": "nvme2n1" 00:13:09.846 }, 00:13:09.846 { 00:13:09.846 "nbd_device": "/dev/nbd5", 00:13:09.846 "bdev_name": "nvme3n1" 00:13:09.846 } 00:13:09.846 ]' 00:13:09.846 02:59:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:13:09.846 02:59:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:13:09.846 { 00:13:09.846 "nbd_device": "/dev/nbd0", 00:13:09.846 "bdev_name": "nvme0n1" 00:13:09.846 }, 00:13:09.846 { 00:13:09.846 "nbd_device": "/dev/nbd1", 00:13:09.846 "bdev_name": "nvme0n2" 00:13:09.846 }, 00:13:09.846 { 00:13:09.846 "nbd_device": "/dev/nbd2", 00:13:09.846 "bdev_name": "nvme0n3" 00:13:09.846 }, 00:13:09.846 { 00:13:09.846 "nbd_device": "/dev/nbd3", 00:13:09.846 "bdev_name": "nvme1n1" 00:13:09.846 }, 00:13:09.846 { 00:13:09.846 "nbd_device": "/dev/nbd4", 00:13:09.846 "bdev_name": "nvme2n1" 
00:13:09.846 }, 00:13:09.846 { 00:13:09.846 "nbd_device": "/dev/nbd5", 00:13:09.846 "bdev_name": "nvme3n1" 00:13:09.846 } 00:13:09.846 ]' 00:13:09.846 02:59:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:13:09.846 02:59:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:13:09.846 02:59:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:09.846 02:59:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:13:09.846 02:59:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:09.846 02:59:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:09.846 02:59:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:09.846 02:59:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:10.104 02:59:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:10.104 02:59:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:10.104 02:59:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:10.104 02:59:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:10.104 02:59:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:10.104 02:59:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:10.104 02:59:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:10.104 02:59:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:10.104 02:59:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:10.104 02:59:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:10.363 02:59:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:10.363 02:59:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:10.363 02:59:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:10.363 02:59:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:10.363 02:59:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:10.363 02:59:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:10.363 02:59:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:10.363 02:59:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:10.363 02:59:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:10.363 02:59:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:10.621 02:59:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:10.621 02:59:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:10.621 02:59:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:10.621 02:59:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 
00:13:10.621 02:59:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:10.621 02:59:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:13:10.621 02:59:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:10.621 02:59:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:10.621 02:59:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:10.621 02:59:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:13:10.968 02:59:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:10.968 02:59:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:13:10.968 02:59:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:10.968 02:59:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:10.968 02:59:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:10.968 02:59:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:10.968 02:59:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:10.968 02:59:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:10.968 02:59:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:10.968 02:59:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:10.968 02:59:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:10.968 02:59:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:10.968 02:59:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:10.968 02:59:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:10.968 02:59:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:10.968 02:59:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:10.968 02:59:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:10.968 02:59:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:10.968 02:59:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:10.968 02:59:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:11.226 02:59:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:13:11.226 02:59:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:11.226 02:59:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:11.226 02:59:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:11.226 02:59:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:11.226 02:59:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:11.226 02:59:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:11.226 02:59:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:11.226 02:59:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:11.226 02:59:57 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:11.226 02:59:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:11.484 02:59:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:11.484 02:59:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:11.484 02:59:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:11.484 02:59:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:11.484 02:59:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:11.484 02:59:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:11.484 02:59:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:11.484 02:59:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:11.484 02:59:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:11.484 02:59:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:13:11.484 02:59:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:13:11.484 02:59:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:13:11.484 02:59:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:11.485 02:59:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:11.485 02:59:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:13:11.485 02:59:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:11.485 02:59:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:11.485 02:59:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:11.485 02:59:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:11.485 02:59:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:11.485 02:59:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:13:11.485 02:59:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:11.485 02:59:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:11.485 02:59:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:11.485 02:59:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:13:11.485 02:59:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:11.485 02:59:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:11.485 02:59:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:13:11.743 /dev/nbd0 00:13:12.001 02:59:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename 
/dev/nbd0 00:13:12.001 02:59:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:12.001 02:59:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:13:12.001 02:59:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:13:12.001 02:59:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:12.001 02:59:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:12.001 02:59:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:13:12.002 02:59:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:13:12.002 02:59:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:12.002 02:59:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:12.002 02:59:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:12.002 1+0 records in 00:13:12.002 1+0 records out 00:13:12.002 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000560407 s, 7.3 MB/s 00:13:12.002 02:59:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:12.002 02:59:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:13:12.002 02:59:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:12.002 02:59:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:12.002 02:59:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:13:12.002 02:59:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:12.002 02:59:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:12.002 02:59:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:13:12.002 /dev/nbd1 00:13:12.002 02:59:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:12.260 02:59:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:12.260 02:59:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:13:12.260 02:59:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:13:12.260 02:59:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:12.260 02:59:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:12.260 02:59:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:13:12.260 02:59:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:13:12.260 02:59:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:12.260 02:59:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:12.260 02:59:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:12.260 1+0 records in 00:13:12.260 1+0 records out 00:13:12.260 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000640029 s, 6.4 MB/s 00:13:12.260 02:59:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:12.260 02:59:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:13:12.260 02:59:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:12.261 02:59:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:12.261 02:59:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:13:12.261 02:59:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:12.261 02:59:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:12.261 02:59:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:13:12.261 /dev/nbd10 00:13:12.261 02:59:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:13:12.519 02:59:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:13:12.519 02:59:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd10 00:13:12.519 02:59:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:13:12.519 02:59:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:12.519 02:59:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:12.519 02:59:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd10 /proc/partitions 00:13:12.519 02:59:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:13:12.519 02:59:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:12.519 02:59:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:12.519 02:59:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:12.519 1+0 records in 00:13:12.519 1+0 records out 00:13:12.519 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000668374 s, 6.1 MB/s 00:13:12.519 02:59:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:12.519 02:59:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:13:12.519 02:59:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:12.519 02:59:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:12.519 02:59:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:13:12.519 02:59:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:12.519 02:59:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:12.519 02:59:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:13:12.519 /dev/nbd11 00:13:12.519 02:59:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:13:12.519 02:59:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:13:12.519 02:59:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd11 00:13:12.519 02:59:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:13:12.519 02:59:58 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:12.519 02:59:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:12.519 02:59:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd11 /proc/partitions 00:13:12.519 02:59:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:13:12.519 02:59:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:12.519 02:59:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:12.519 02:59:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:12.519 1+0 records in 00:13:12.519 1+0 records out 00:13:12.519 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00081648 s, 5.0 MB/s 00:13:12.519 02:59:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:12.519 02:59:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:13:12.519 02:59:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:12.519 02:59:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:12.519 02:59:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:13:12.520 02:59:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:12.520 02:59:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:12.520 02:59:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:13:12.778 /dev/nbd12 00:13:12.778 02:59:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:13:12.778 02:59:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:13:12.778 02:59:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd12 00:13:12.778 02:59:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:13:12.778 02:59:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:12.778 02:59:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:12.778 02:59:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd12 /proc/partitions 00:13:12.778 02:59:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:13:12.778 02:59:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:12.778 02:59:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:12.778 02:59:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:12.778 1+0 records in 00:13:12.778 1+0 records out 00:13:12.778 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000787784 s, 5.2 MB/s 00:13:12.778 02:59:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:12.778 02:59:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:13:12.778 02:59:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:12.778 02:59:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # 
'[' 4096 '!=' 0 ']' 00:13:12.778 02:59:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:13:12.778 02:59:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:12.778 02:59:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:12.778 02:59:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:13:13.037 /dev/nbd13 00:13:13.295 02:59:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:13:13.295 02:59:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:13:13.295 02:59:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd13 00:13:13.295 02:59:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:13:13.295 02:59:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:13.295 02:59:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:13.295 02:59:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd13 /proc/partitions 00:13:13.295 02:59:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:13:13.295 02:59:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:13.295 02:59:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:13.295 02:59:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:13.295 1+0 records in 00:13:13.295 1+0 records out 00:13:13.295 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000602363 s, 6.8 MB/s 00:13:13.295 02:59:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:13.295 02:59:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:13:13.295 02:59:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:13.295 02:59:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:13.295 02:59:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:13:13.295 02:59:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:13.295 02:59:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:13.295 02:59:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:13.295 02:59:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:13.295 02:59:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:13.295 02:59:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:13.295 { 00:13:13.295 "nbd_device": "/dev/nbd0", 00:13:13.295 "bdev_name": "nvme0n1" 00:13:13.295 }, 00:13:13.295 { 00:13:13.295 "nbd_device": "/dev/nbd1", 00:13:13.295 "bdev_name": "nvme0n2" 00:13:13.295 }, 00:13:13.295 { 00:13:13.295 "nbd_device": "/dev/nbd10", 00:13:13.295 "bdev_name": "nvme0n3" 00:13:13.295 }, 00:13:13.295 { 00:13:13.295 "nbd_device": "/dev/nbd11", 00:13:13.295 "bdev_name": "nvme1n1" 00:13:13.295 }, 00:13:13.295 { 00:13:13.295 "nbd_device": "/dev/nbd12", 00:13:13.295 "bdev_name": "nvme2n1" 00:13:13.295 }, 
00:13:13.295 { 00:13:13.295 "nbd_device": "/dev/nbd13", 00:13:13.295 "bdev_name": "nvme3n1" 00:13:13.295 } 00:13:13.295 ]' 00:13:13.295 02:59:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:13.295 02:59:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:13.295 { 00:13:13.295 "nbd_device": "/dev/nbd0", 00:13:13.295 "bdev_name": "nvme0n1" 00:13:13.295 }, 00:13:13.295 { 00:13:13.295 "nbd_device": "/dev/nbd1", 00:13:13.295 "bdev_name": "nvme0n2" 00:13:13.296 }, 00:13:13.296 { 00:13:13.296 "nbd_device": "/dev/nbd10", 00:13:13.296 "bdev_name": "nvme0n3" 00:13:13.296 }, 00:13:13.296 { 00:13:13.296 "nbd_device": "/dev/nbd11", 00:13:13.296 "bdev_name": "nvme1n1" 00:13:13.296 }, 00:13:13.296 { 00:13:13.296 "nbd_device": "/dev/nbd12", 00:13:13.296 "bdev_name": "nvme2n1" 00:13:13.296 }, 00:13:13.296 { 00:13:13.296 "nbd_device": "/dev/nbd13", 00:13:13.296 "bdev_name": "nvme3n1" 00:13:13.296 } 00:13:13.296 ]' 00:13:13.554 02:59:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:13.554 /dev/nbd1 00:13:13.554 /dev/nbd10 00:13:13.554 /dev/nbd11 00:13:13.554 /dev/nbd12 00:13:13.554 /dev/nbd13' 00:13:13.554 02:59:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:13.554 /dev/nbd1 00:13:13.554 /dev/nbd10 00:13:13.554 /dev/nbd11 00:13:13.554 /dev/nbd12 00:13:13.554 /dev/nbd13' 00:13:13.554 02:59:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:13.554 02:59:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:13:13.554 02:59:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:13:13.554 02:59:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:13:13.554 02:59:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:13:13.554 02:59:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:13:13.554 02:59:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:13.554 02:59:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:13.555 02:59:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:13.555 02:59:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:13.555 02:59:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:13.555 02:59:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:13:13.555 256+0 records in 00:13:13.555 256+0 records out 00:13:13.555 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00811419 s, 129 MB/s 00:13:13.555 02:59:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:13.555 02:59:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:13.555 256+0 records in 00:13:13.555 256+0 records out 00:13:13.555 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.162807 s, 6.4 MB/s 00:13:13.555 02:59:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:13.555 02:59:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 
of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:13.813 256+0 records in 00:13:13.813 256+0 records out 00:13:13.813 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.170589 s, 6.1 MB/s 00:13:13.813 02:59:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:13.813 02:59:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:13:14.071 256+0 records in 00:13:14.071 256+0 records out 00:13:14.071 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.169351 s, 6.2 MB/s 00:13:14.071 02:59:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:14.071 02:59:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:13:14.071 256+0 records in 00:13:14.071 256+0 records out 00:13:14.071 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.14784 s, 7.1 MB/s 00:13:14.071 03:00:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:14.071 03:00:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:13:14.329 256+0 records in 00:13:14.329 256+0 records out 00:13:14.330 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.173234 s, 6.1 MB/s 00:13:14.330 03:00:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:14.330 03:00:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:13:14.588 256+0 records in 00:13:14.588 256+0 records out 00:13:14.588 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.169061 s, 6.2 MB/s 00:13:14.588 03:00:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:13:14.588 03:00:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:14.588 03:00:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:14.588 03:00:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:14.588 03:00:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:14.588 03:00:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:14.588 03:00:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:14.588 03:00:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:14.588 03:00:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:13:14.588 03:00:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:14.588 03:00:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:13:14.588 03:00:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:14.588 03:00:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:13:14.588 03:00:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:13:14.588 03:00:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:13:14.588 03:00:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:14.588 03:00:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:13:14.589 03:00:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:14.589 03:00:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:13:14.589 03:00:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:14.589 03:00:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:14.589 03:00:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:14.589 03:00:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:14.589 03:00:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:14.589 03:00:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:14.589 03:00:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:14.589 03:00:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:14.847 03:00:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:14.847 03:00:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:14.847 03:00:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:14.847 03:00:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:14.847 03:00:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:14.847 03:00:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:14.847 03:00:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:14.847 03:00:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:14.847 03:00:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:14.847 03:00:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:15.104 03:00:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:15.104 03:00:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:15.104 03:00:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:15.104 03:00:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:15.104 03:00:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:15.104 03:00:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:15.104 03:00:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:15.104 03:00:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:15.104 03:00:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:13:15.104 03:00:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:13:15.361 03:00:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:15.361 03:00:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:15.361 03:00:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:13:15.361 03:00:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:15.361 03:00:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:15.361 03:00:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:15.361 03:00:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:15.361 03:00:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:15.361 03:00:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:15.361 03:00:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:15.616 03:00:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:15.616 03:00:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:15.616 03:00:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:15.616 03:00:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:15.616 03:00:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:15.616 03:00:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:15.616 03:00:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:15.616 03:00:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:15.616 03:00:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:15.616 03:00:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:15.873 03:00:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:13:15.873 03:00:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:15.873 03:00:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:15.873 03:00:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:15.873 03:00:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:15.873 03:00:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:13:15.873 03:00:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:15.873 03:00:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:15.873 03:00:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:15.873 03:00:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:16.130 03:00:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:16.130 03:00:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:16.130 03:00:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:16.130 
03:00:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:16.130 03:00:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:16.130 03:00:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:16.130 03:00:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:16.130 03:00:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:16.130 03:00:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:16.130 03:00:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:16.130 03:00:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:16.388 03:00:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:16.388 03:00:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:16.388 03:00:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:16.388 03:00:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:16.388 03:00:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:16.388 03:00:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:16.388 03:00:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:16.388 03:00:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:16.388 03:00:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:16.388 03:00:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:13:16.388 03:00:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:16.388 03:00:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:13:16.388 03:00:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:16.388 03:00:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:16.388 03:00:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:16.388 03:00:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:13:16.388 03:00:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:13:16.388 03:00:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:13:16.646 malloc_lvol_verify 00:13:16.646 03:00:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:13:16.904 cd202485-ad38-46ca-a2fb-e1bd0fec8695 00:13:16.904 03:00:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:13:17.162 d7933d3f-e907-439b-ab54-cfba3844bb4a 00:13:17.162 03:00:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:13:17.424 /dev/nbd0 00:13:17.424 03:00:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 
00:13:17.424 mke2fs 1.46.5 (30-Dec-2021) 00:13:17.424 Discarding device blocks: 0/4096 done 00:13:17.424 Creating filesystem with 4096 1k blocks and 1024 inodes 00:13:17.424 00:13:17.424 Allocating group tables: 0/1 done 00:13:17.424 Writing inode tables: 0/1 done 00:13:17.424 Creating journal (1024 blocks): done 00:13:17.424 Writing superblocks and filesystem accounting information: 0/1 done 00:13:17.424 00:13:17.424 03:00:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:13:17.424 03:00:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:17.424 03:00:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:17.424 03:00:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:17.424 03:00:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:17.424 03:00:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:17.424 03:00:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:17.424 03:00:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:17.693 03:00:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:17.693 03:00:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:17.693 03:00:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:17.693 03:00:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:17.694 03:00:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:17.694 03:00:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:17.694 03:00:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:17.694 03:00:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:17.694 03:00:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:13:17.694 03:00:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:13:17.694 03:00:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 86321 00:13:17.694 03:00:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@946 -- # '[' -z 86321 ']' 00:13:17.694 03:00:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@950 -- # kill -0 86321 00:13:17.694 03:00:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@951 -- # uname 00:13:17.694 03:00:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:17.694 03:00:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 86321 00:13:17.694 03:00:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:17.694 03:00:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:17.694 killing process with pid 86321 00:13:17.694 03:00:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # echo 'killing process with pid 86321' 00:13:17.694 03:00:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@965 -- # kill 86321 00:13:17.694 03:00:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@970 -- # wait 86321 00:13:17.953 03:00:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:13:17.953 00:13:17.953 real 0m11.051s 00:13:17.953 user 0m15.871s 
00:13:17.953 sys 0m3.774s 00:13:17.953 03:00:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:17.953 03:00:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:17.953 ************************************ 00:13:17.953 END TEST bdev_nbd 00:13:17.953 ************************************ 00:13:17.953 03:00:03 blockdev_xnvme -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:13:17.953 03:00:03 blockdev_xnvme -- bdev/blockdev.sh@764 -- # '[' xnvme = nvme ']' 00:13:17.953 03:00:03 blockdev_xnvme -- bdev/blockdev.sh@764 -- # '[' xnvme = gpt ']' 00:13:17.953 03:00:03 blockdev_xnvme -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:13:17.953 03:00:03 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:17.953 03:00:03 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:17.953 03:00:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:17.953 ************************************ 00:13:17.953 START TEST bdev_fio 00:13:17.953 ************************************ 00:13:17.953 03:00:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1121 -- # fio_test_suite '' 00:13:17.953 03:00:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:13:17.953 03:00:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:13:17.953 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:13:17.953 03:00:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:13:17.953 03:00:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:13:17.953 03:00:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:13:17.953 03:00:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:13:17.953 03:00:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:13:17.953 03:00:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1276 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:17.953 03:00:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1277 -- # local workload=verify 00:13:17.953 03:00:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1278 -- # local bdev_type=AIO 00:13:17.953 03:00:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1279 -- # local env_context= 00:13:17.953 03:00:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local fio_dir=/usr/src/fio 00:13:17.953 03:00:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:17.954 03:00:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # '[' -z verify ']' 00:13:17.954 03:00:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -n '' ']' 00:13:17.954 03:00:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:17.954 03:00:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1297 -- # cat 00:13:17.954 03:00:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1309 -- # '[' verify == verify ']' 00:13:17.954 03:00:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1310 -- # cat 00:13:17.954 03:00:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1319 -- # '[' AIO == AIO ']' 00:13:17.954 03:00:03 blockdev_xnvme.bdev_fio -- 
common/autotest_common.sh@1320 -- # /usr/src/fio/fio --version 00:13:17.954 03:00:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1320 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:13:17.954 03:00:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1321 -- # echo serialize_overlap=1 00:13:17.954 03:00:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:17.954 03:00:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme0n1]' 00:13:17.954 03:00:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme0n1 00:13:17.954 03:00:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:17.954 03:00:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme0n2]' 00:13:17.954 03:00:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme0n2 00:13:17.954 03:00:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:17.954 03:00:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme0n3]' 00:13:17.954 03:00:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme0n3 00:13:17.954 03:00:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:17.954 03:00:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme1n1]' 00:13:17.954 03:00:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme1n1 00:13:17.954 03:00:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:17.954 03:00:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme2n1]' 00:13:17.954 03:00:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme2n1 00:13:17.954 03:00:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:17.954 03:00:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme3n1]' 00:13:17.954 03:00:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme3n1 00:13:17.954 03:00:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:13:17.954 03:00:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:17.954 03:00:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:13:17.954 03:00:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:17.954 03:00:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:17.954 ************************************ 00:13:17.954 START TEST bdev_fio_rw_verify 00:13:17.954 ************************************ 00:13:17.954 03:00:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1121 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:17.954 03:00:03 
blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:17.954 03:00:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:13:17.954 03:00:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:17.954 03:00:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1335 -- # local sanitizers 00:13:17.954 03:00:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:17.954 03:00:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # shift 00:13:17.954 03:00:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local asan_lib= 00:13:17.954 03:00:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:13:17.954 03:00:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:17.954 03:00:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # grep libasan 00:13:17.954 03:00:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:13:17.954 03:00:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:17.954 03:00:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:17.954 03:00:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # break 00:13:17.954 03:00:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:17.954 03:00:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:18.212 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:18.212 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:18.212 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:18.212 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:18.212 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:18.212 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:18.212 fio-3.35 00:13:18.212 Starting 6 threads 00:13:30.408 00:13:30.408 job_nvme0n1: 
(groupid=0, jobs=6): err= 0: pid=86718: Tue May 14 03:00:14 2024 00:13:30.408 read: IOPS=29.4k, BW=115MiB/s (120MB/s)(1149MiB/10001msec) 00:13:30.408 slat (usec): min=3, max=877, avg= 6.70, stdev= 4.02 00:13:30.408 clat (usec): min=82, max=4149, avg=630.05, stdev=202.47 00:13:30.408 lat (usec): min=89, max=4160, avg=636.75, stdev=203.14 00:13:30.408 clat percentiles (usec): 00:13:30.408 | 50.000th=[ 668], 99.000th=[ 1090], 99.900th=[ 1614], 99.990th=[ 2638], 00:13:30.408 | 99.999th=[ 4146] 00:13:30.408 write: IOPS=29.8k, BW=116MiB/s (122MB/s)(1163MiB/10001msec); 0 zone resets 00:13:30.408 slat (usec): min=7, max=2169, avg=25.79, stdev=25.01 00:13:30.408 clat (usec): min=98, max=5724, avg=719.86, stdev=224.00 00:13:30.408 lat (usec): min=118, max=5771, avg=745.65, stdev=225.89 00:13:30.408 clat percentiles (usec): 00:13:30.408 | 50.000th=[ 734], 99.000th=[ 1352], 99.900th=[ 2073], 99.990th=[ 4047], 00:13:30.408 | 99.999th=[ 5669] 00:13:30.408 bw ( KiB/s): min=98295, max=144496, per=100.00%, avg=119148.68, stdev=2395.26, samples=114 00:13:30.408 iops : min=24573, max=36124, avg=29787.00, stdev=598.82, samples=114 00:13:30.408 lat (usec) : 100=0.01%, 250=2.79%, 500=17.05%, 750=43.44%, 1000=32.52% 00:13:30.408 lat (msec) : 2=4.12%, 4=0.08%, 10=0.01% 00:13:30.408 cpu : usr=61.42%, sys=25.49%, ctx=8065, majf=0, minf=25068 00:13:30.408 IO depths : 1=11.9%, 2=24.3%, 4=50.6%, 8=13.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:30.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:30.408 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:30.408 issued rwts: total=294055,297781,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:30.408 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:30.408 00:13:30.408 Run status group 0 (all jobs): 00:13:30.408 READ: bw=115MiB/s (120MB/s), 115MiB/s-115MiB/s (120MB/s-120MB/s), io=1149MiB (1204MB), run=10001-10001msec 00:13:30.408 WRITE: bw=116MiB/s (122MB/s), 116MiB/s-116MiB/s (122MB/s-122MB/s), io=1163MiB (1220MB), run=10001-10001msec 00:13:30.408 ----------------------------------------------------- 00:13:30.408 Suppressions used: 00:13:30.408 count bytes template 00:13:30.408 6 48 /usr/src/fio/parse.c 00:13:30.408 3532 339072 /usr/src/fio/iolog.c 00:13:30.408 1 8 libtcmalloc_minimal.so 00:13:30.408 1 904 libcrypto.so 00:13:30.408 ----------------------------------------------------- 00:13:30.408 00:13:30.408 00:13:30.408 real 0m11.139s 00:13:30.408 user 0m37.564s 00:13:30.408 sys 0m15.611s 00:13:30.408 03:00:15 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:30.408 ************************************ 00:13:30.408 03:00:15 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:13:30.408 END TEST bdev_fio_rw_verify 00:13:30.408 ************************************ 00:13:30.408 03:00:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:13:30.408 03:00:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:30.408 03:00:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:13:30.408 03:00:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1276 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:30.408 03:00:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1277 -- # local workload=trim 00:13:30.408 03:00:15 blockdev_xnvme.bdev_fio -- 
common/autotest_common.sh@1278 -- # local bdev_type= 00:13:30.408 03:00:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1279 -- # local env_context= 00:13:30.408 03:00:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local fio_dir=/usr/src/fio 00:13:30.408 03:00:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:30.408 03:00:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # '[' -z trim ']' 00:13:30.408 03:00:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -n '' ']' 00:13:30.408 03:00:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:30.408 03:00:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1297 -- # cat 00:13:30.408 03:00:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1309 -- # '[' trim == verify ']' 00:13:30.408 03:00:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # '[' trim == trim ']' 00:13:30.408 03:00:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # echo rw=trimwrite 00:13:30.408 03:00:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:30.408 03:00:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "ad21e06e-6a9b-497a-96bc-0995d9955ce8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ad21e06e-6a9b-497a-96bc-0995d9955ce8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "6ed12313-87d6-4cef-8df6-35f07619a129"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6ed12313-87d6-4cef-8df6-35f07619a129",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "970e43c4-0d6f-4584-9a87-61f6faa9b520"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "970e43c4-0d6f-4584-9a87-61f6faa9b520",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "b9279891-39a4-43c0-9179-a20f11a3d043"' ' ],' ' 
"product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "b9279891-39a4-43c0-9179-a20f11a3d043",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "8c6c8998-b8e8-4f82-9ae3-34ce29bdcfed"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "8c6c8998-b8e8-4f82-9ae3-34ce29bdcfed",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "9b6502a7-e935-4661-8c94-fefb194caff7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "9b6502a7-e935-4661-8c94-fefb194caff7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:13:30.408 03:00:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n '' ]] 00:13:30.408 03:00:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:30.408 /home/vagrant/spdk_repo/spdk 00:13:30.408 03:00:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # popd 00:13:30.408 03:00:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # trap - SIGINT SIGTERM EXIT 00:13:30.408 03:00:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@364 -- # return 0 00:13:30.408 00:13:30.408 real 0m11.316s 00:13:30.408 user 0m37.665s 00:13:30.408 sys 0m15.685s 00:13:30.408 03:00:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:30.408 ************************************ 00:13:30.408 END TEST bdev_fio 00:13:30.408 ************************************ 00:13:30.408 03:00:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:30.408 03:00:15 blockdev_xnvme -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:30.408 03:00:15 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:30.408 03:00:15 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:13:30.408 03:00:15 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:30.408 03:00:15 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:30.408 
************************************ 00:13:30.408 START TEST bdev_verify 00:13:30.408 ************************************ 00:13:30.408 03:00:15 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:30.408 [2024-05-14 03:00:15.277537] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:13:30.408 [2024-05-14 03:00:15.277736] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86888 ] 00:13:30.408 [2024-05-14 03:00:15.427798] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:30.408 [2024-05-14 03:00:15.451128] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:30.408 [2024-05-14 03:00:15.495300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.408 [2024-05-14 03:00:15.495353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:30.408 Running I/O for 5 seconds... 00:13:35.736 00:13:35.736 Latency(us) 00:13:35.736 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:35.736 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:35.736 Verification LBA range: start 0x0 length 0x80000 00:13:35.736 nvme0n1 : 5.04 1575.25 6.15 0.00 0.00 81107.08 11498.59 79119.83 00:13:35.736 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:35.736 Verification LBA range: start 0x80000 length 0x80000 00:13:35.736 nvme0n1 : 5.05 1521.83 5.94 0.00 0.00 83947.61 13762.56 79119.83 00:13:35.736 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:35.736 Verification LBA range: start 0x0 length 0x80000 00:13:35.736 nvme0n2 : 5.04 1574.62 6.15 0.00 0.00 80997.15 18469.24 67680.81 00:13:35.736 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:35.736 Verification LBA range: start 0x80000 length 0x80000 00:13:35.736 nvme0n2 : 5.06 1518.21 5.93 0.00 0.00 83974.56 18826.71 70540.57 00:13:35.736 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:35.736 Verification LBA range: start 0x0 length 0x80000 00:13:35.736 nvme0n3 : 5.06 1593.92 6.23 0.00 0.00 79873.97 11319.85 69110.69 00:13:35.736 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:35.736 Verification LBA range: start 0x80000 length 0x80000 00:13:35.736 nvme0n3 : 5.06 1517.36 5.93 0.00 0.00 83840.27 17396.83 68157.44 00:13:35.736 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:35.736 Verification LBA range: start 0x0 length 0x20000 00:13:35.736 nvme1n1 : 5.04 1573.96 6.15 0.00 0.00 80739.92 10783.65 74353.57 00:13:35.737 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:35.737 Verification LBA range: start 0x20000 length 0x20000 00:13:35.737 nvme1n1 : 5.05 1521.10 5.94 0.00 0.00 83457.40 13822.14 75783.45 00:13:35.737 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:35.737 Verification LBA range: start 0x0 length 0xbd0bd 00:13:35.737 nvme2n1 : 5.05 2725.47 10.65 0.00 0.00 46510.63 4319.42 65297.69 00:13:35.737 Job: nvme2n1 (Core Mask 0x2, workload: 
verify, depth: 128, IO size: 4096) 00:13:35.737 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:13:35.737 nvme2n1 : 5.07 2706.88 10.57 0.00 0.00 46746.71 4468.36 73400.32 00:13:35.737 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:35.737 Verification LBA range: start 0x0 length 0xa0000 00:13:35.737 nvme3n1 : 5.06 1594.53 6.23 0.00 0.00 79351.93 6464.23 78166.57 00:13:35.737 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:35.737 Verification LBA range: start 0xa0000 length 0xa0000 00:13:35.737 nvme3n1 : 5.07 1540.38 6.02 0.00 0.00 82081.35 7387.69 78166.57 00:13:35.737 =================================================================================================================== 00:13:35.737 Total : 20963.52 81.89 0.00 0.00 72752.88 4319.42 79119.83 00:13:35.737 00:13:35.737 real 0m5.827s 00:13:35.737 user 0m9.123s 00:13:35.737 sys 0m1.600s 00:13:35.737 03:00:21 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:35.737 ************************************ 00:13:35.737 END TEST bdev_verify 00:13:35.737 ************************************ 00:13:35.737 03:00:21 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:13:35.737 03:00:21 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:35.737 03:00:21 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:13:35.737 03:00:21 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:35.737 03:00:21 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:35.737 ************************************ 00:13:35.737 START TEST bdev_verify_big_io 00:13:35.737 ************************************ 00:13:35.737 03:00:21 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:35.737 [2024-05-14 03:00:21.158460] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:13:35.737 [2024-05-14 03:00:21.158653] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86971 ] 00:13:35.737 [2024-05-14 03:00:21.308365] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:35.737 [2024-05-14 03:00:21.333058] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:35.737 [2024-05-14 03:00:21.377110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:35.737 [2024-05-14 03:00:21.377190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:35.737 Running I/O for 5 seconds... 
00:13:42.294 00:13:42.294 Latency(us) 00:13:42.294 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:42.294 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:42.294 Verification LBA range: start 0x0 length 0x8000 00:13:42.295 nvme0n1 : 5.94 125.19 7.82 0.00 0.00 987603.02 35746.91 983754.94 00:13:42.295 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:42.295 Verification LBA range: start 0x8000 length 0x8000 00:13:42.295 nvme0n1 : 5.96 104.71 6.54 0.00 0.00 1183442.67 183977.43 1670095.59 00:13:42.295 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:42.295 Verification LBA range: start 0x0 length 0x8000 00:13:42.295 nvme0n2 : 5.97 131.40 8.21 0.00 0.00 910373.59 24427.05 1174405.12 00:13:42.295 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:42.295 Verification LBA range: start 0x8000 length 0x8000 00:13:42.295 nvme0n2 : 5.97 150.00 9.38 0.00 0.00 799537.60 31218.97 815982.78 00:13:42.295 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:42.295 Verification LBA range: start 0x0 length 0x8000 00:13:42.295 nvme0n3 : 5.97 126.06 7.88 0.00 0.00 922112.44 170631.91 1273543.21 00:13:42.295 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:42.295 Verification LBA range: start 0x8000 length 0x8000 00:13:42.295 nvme0n3 : 5.96 118.08 7.38 0.00 0.00 988939.00 5451.40 1326925.27 00:13:42.295 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:42.295 Verification LBA range: start 0x0 length 0x2000 00:13:42.295 nvme1n1 : 5.98 72.19 4.51 0.00 0.00 1593472.26 22282.24 2516582.40 00:13:42.295 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:42.295 Verification LBA range: start 0x2000 length 0x2000 00:13:42.295 nvme1n1 : 5.96 104.62 6.54 0.00 0.00 1081794.66 129642.12 2211542.11 00:13:42.295 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:42.295 Verification LBA range: start 0x0 length 0xbd0b 00:13:42.295 nvme2n1 : 5.98 160.56 10.03 0.00 0.00 692541.38 9413.35 1105771.05 00:13:42.295 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:42.295 Verification LBA range: start 0xbd0b length 0xbd0b 00:13:42.295 nvme2n1 : 5.98 171.34 10.71 0.00 0.00 651765.64 24903.68 510942.49 00:13:42.295 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:42.295 Verification LBA range: start 0x0 length 0xa000 00:13:42.295 nvme3n1 : 5.97 133.98 8.37 0.00 0.00 808082.58 6821.70 1555705.48 00:13:42.295 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:42.295 Verification LBA range: start 0xa000 length 0xa000 00:13:42.295 nvme3n1 : 5.97 92.96 5.81 0.00 0.00 1157456.68 12988.04 3309687.16 00:13:42.295 =================================================================================================================== 00:13:42.295 Total : 1491.09 93.19 0.00 0.00 930161.53 5451.40 3309687.16 00:13:42.295 00:13:42.295 real 0m6.738s 00:13:42.295 user 0m12.250s 00:13:42.295 sys 0m0.518s 00:13:42.295 03:00:27 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:42.295 03:00:27 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.295 ************************************ 00:13:42.295 END TEST bdev_verify_big_io 00:13:42.295 ************************************ 00:13:42.295 
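Editor's note: bdev_verify and bdev_verify_big_io above are the same bdevperf example binary run in verify mode, first with 4 KiB and then with 64 KiB I/O. The two invocations (minus the wrapper's empty trailing argument), reproduced from the xtrace for readers who want to re-run them by hand against an existing bdev.json:

    # Commands as they appear in the log; run from a built SPDK tree.
    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    BDEVPERF="$SPDK_DIR/build/examples/bdevperf"
    CONF="$SPDK_DIR/test/bdev/bdev.json"

    # bdev_verify: queue depth 128, 4 KiB I/O, 5 seconds, core mask 0x3
    "$BDEVPERF" --json "$CONF" -q 128 -o 4096  -w verify -t 5 -C -m 0x3

    # bdev_verify_big_io: identical except for 64 KiB I/O
    "$BDEVPERF" --json "$CONF" -q 128 -o 65536 -w verify -t 5 -C -m 0x3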
03:00:27 blockdev_xnvme -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:42.295 03:00:27 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:13:42.295 03:00:27 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:42.295 03:00:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:42.295 ************************************ 00:13:42.295 START TEST bdev_write_zeroes 00:13:42.295 ************************************ 00:13:42.295 03:00:27 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:42.295 [2024-05-14 03:00:27.940377] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:13:42.295 [2024-05-14 03:00:27.940550] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87072 ] 00:13:42.295 [2024-05-14 03:00:28.089250] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:42.295 [2024-05-14 03:00:28.108243] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:42.295 [2024-05-14 03:00:28.142937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:42.295 Running I/O for 1 seconds... 00:13:43.680 00:13:43.680 Latency(us) 00:13:43.680 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:43.680 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:43.680 nvme0n1 : 1.01 10593.99 41.38 0.00 0.00 12069.10 7298.33 17992.61 00:13:43.680 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:43.680 nvme0n2 : 1.02 10582.11 41.34 0.00 0.00 12074.73 7268.54 18469.24 00:13:43.680 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:43.680 nvme0n3 : 1.02 10570.53 41.29 0.00 0.00 12077.15 7357.91 19303.33 00:13:43.680 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:43.680 nvme1n1 : 1.02 10557.99 41.24 0.00 0.00 12080.42 7328.12 20256.58 00:13:43.680 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:43.680 nvme2n1 : 1.01 16514.98 64.51 0.00 0.00 7713.24 3961.95 15728.64 00:13:43.680 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:43.680 nvme3n1 : 1.02 10545.97 41.20 0.00 0.00 12006.24 7060.01 19779.96 00:13:43.680 =================================================================================================================== 00:13:43.680 Total : 69365.57 270.96 0.00 0.00 11029.23 3961.95 20256.58 00:13:43.680 00:13:43.680 real 0m1.695s 00:13:43.680 user 0m0.976s 00:13:43.680 sys 0m0.539s 00:13:43.680 03:00:29 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:43.680 03:00:29 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:13:43.681 ************************************ 00:13:43.681 END TEST bdev_write_zeroes 00:13:43.681 ************************************ 00:13:43.681 03:00:29 
blockdev_xnvme -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:43.681 03:00:29 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:13:43.681 03:00:29 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:43.681 03:00:29 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:43.681 ************************************ 00:13:43.681 START TEST bdev_json_nonenclosed 00:13:43.681 ************************************ 00:13:43.681 03:00:29 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:43.681 [2024-05-14 03:00:29.700719] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:13:43.681 [2024-05-14 03:00:29.700871] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87115 ] 00:13:43.939 [2024-05-14 03:00:29.841800] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:43.939 [2024-05-14 03:00:29.866061] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.939 [2024-05-14 03:00:29.910298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:43.939 [2024-05-14 03:00:29.910451] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:13:43.939 [2024-05-14 03:00:29.910484] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:43.939 [2024-05-14 03:00:29.910505] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:44.235 00:13:44.235 real 0m0.435s 00:13:44.235 user 0m0.215s 00:13:44.235 sys 0m0.116s 00:13:44.235 03:00:30 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:44.235 03:00:30 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:13:44.235 ************************************ 00:13:44.235 END TEST bdev_json_nonenclosed 00:13:44.235 ************************************ 00:13:44.235 03:00:30 blockdev_xnvme -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:44.235 03:00:30 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:13:44.235 03:00:30 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:44.235 03:00:30 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:44.235 ************************************ 00:13:44.235 START TEST bdev_json_nonarray 00:13:44.235 ************************************ 00:13:44.235 03:00:30 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:44.235 [2024-05-14 03:00:30.178512] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:13:44.235 [2024-05-14 03:00:30.178692] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87135 ] 00:13:44.509 [2024-05-14 03:00:30.327404] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:44.509 [2024-05-14 03:00:30.350658] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:44.509 [2024-05-14 03:00:30.392207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.509 [2024-05-14 03:00:30.392372] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
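Editor's note: bdev_json_nonenclosed and bdev_json_nonarray above are negative tests; bdevperf is pointed at deliberately malformed JSON configs and each suite passes because the loader rejects them with the two errors shown ("not enclosed in {}" and "'subsystems' should be an array"). The actual fixture files under test/bdev/ are not reproduced in this log; the snippets below are illustrative stand-ins that would exercise the same two error paths:

    # Sketch only: hypothetical configs matching the two error messages above.
    printf '%s\n' '"subsystems": []'      > nonenclosed.json   # top level not enclosed in {}
    printf '%s\n' '{ "subsystems": 123 }' > nonarray.json      # "subsystems" is not an array

    # Either file should make bdevperf fail during JSON config load and exit non-zero.
    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    "$BDEVPERF" --json nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 \
        && echo "unexpected success"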
00:13:44.509 [2024-05-14 03:00:30.392427] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:44.509 [2024-05-14 03:00:30.392461] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:44.509 ************************************ 00:13:44.509 END TEST bdev_json_nonarray 00:13:44.509 ************************************ 00:13:44.509 00:13:44.509 real 0m0.439s 00:13:44.509 user 0m0.197s 00:13:44.509 sys 0m0.137s 00:13:44.509 03:00:30 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:44.509 03:00:30 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:13:44.769 03:00:30 blockdev_xnvme -- bdev/blockdev.sh@787 -- # [[ xnvme == bdev ]] 00:13:44.769 03:00:30 blockdev_xnvme -- bdev/blockdev.sh@794 -- # [[ xnvme == gpt ]] 00:13:44.769 03:00:30 blockdev_xnvme -- bdev/blockdev.sh@798 -- # [[ xnvme == crypto_sw ]] 00:13:44.769 03:00:30 blockdev_xnvme -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:13:44.769 03:00:30 blockdev_xnvme -- bdev/blockdev.sh@811 -- # cleanup 00:13:44.769 03:00:30 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:44.769 03:00:30 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:44.769 03:00:30 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:13:44.769 03:00:30 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:13:44.769 03:00:30 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:13:44.769 03:00:30 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:13:44.769 03:00:30 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:45.335 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:51.894 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:51.894 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:51.894 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:51.894 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:51.894 ************************************ 00:13:51.894 END TEST blockdev_xnvme 00:13:51.894 ************************************ 00:13:51.894 00:13:51.894 real 0m53.678s 00:13:51.894 user 1m28.811s 00:13:51.894 sys 0m29.790s 00:13:51.894 03:00:36 blockdev_xnvme -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:51.894 03:00:36 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:51.894 03:00:36 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:51.894 03:00:36 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:51.894 03:00:36 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:51.894 03:00:36 -- common/autotest_common.sh@10 -- # set +x 00:13:51.894 ************************************ 00:13:51.894 START TEST ublk 00:13:51.894 ************************************ 00:13:51.894 03:00:36 ublk -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:51.894 * Looking for test storage... 
00:13:51.894 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:13:51.894 03:00:36 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:51.894 03:00:36 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:51.894 03:00:36 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:51.894 03:00:36 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:51.894 03:00:36 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:51.894 03:00:36 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:51.894 03:00:36 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:51.894 03:00:36 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:51.894 03:00:36 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:13:51.894 03:00:36 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:13:51.894 03:00:36 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:13:51.894 03:00:36 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:13:51.894 03:00:36 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:13:51.894 03:00:36 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:13:51.894 03:00:36 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:13:51.894 03:00:36 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:13:51.894 03:00:36 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:13:51.894 03:00:36 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:13:51.894 03:00:36 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:13:51.894 03:00:36 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:13:51.894 03:00:36 ublk -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:51.894 03:00:36 ublk -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:51.894 03:00:36 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:51.894 ************************************ 00:13:51.894 START TEST test_save_ublk_config 00:13:51.894 ************************************ 00:13:51.894 03:00:36 ublk.test_save_ublk_config -- common/autotest_common.sh@1121 -- # test_save_config 00:13:51.894 03:00:36 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:13:51.894 03:00:36 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=87415 00:13:51.894 03:00:36 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:13:51.894 03:00:36 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 87415 00:13:51.894 03:00:36 ublk.test_save_ublk_config -- common/autotest_common.sh@827 -- # '[' -z 87415 ']' 00:13:51.894 03:00:36 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.894 03:00:36 ublk.test_save_ublk_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:51.894 03:00:36 ublk.test_save_ublk_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:51.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:51.894 03:00:36 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:51.894 03:00:36 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:51.894 03:00:36 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:13:51.894 [2024-05-14 03:00:37.113443] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 
00:13:51.894 [2024-05-14 03:00:37.113613] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87415 ] 00:13:51.894 [2024-05-14 03:00:37.255739] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:51.894 [2024-05-14 03:00:37.273963] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.894 [2024-05-14 03:00:37.324463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.153 03:00:38 ublk.test_save_ublk_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:52.153 03:00:38 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # return 0 00:13:52.153 03:00:38 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:13:52.153 03:00:38 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:13:52.153 03:00:38 ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.153 03:00:38 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:52.153 [2024-05-14 03:00:38.048251] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:52.153 [2024-05-14 03:00:38.048563] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:52.153 malloc0 00:13:52.153 [2024-05-14 03:00:38.072420] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:52.153 [2024-05-14 03:00:38.072583] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:52.153 [2024-05-14 03:00:38.072617] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:52.153 [2024-05-14 03:00:38.072631] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:52.153 [2024-05-14 03:00:38.081351] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:52.153 [2024-05-14 03:00:38.081389] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:52.153 [2024-05-14 03:00:38.087272] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:52.153 [2024-05-14 03:00:38.087403] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:52.153 [2024-05-14 03:00:38.104193] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:52.153 0 00:13:52.153 03:00:38 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.153 03:00:38 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:13:52.153 03:00:38 ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.153 03:00:38 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:52.411 03:00:38 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.411 03:00:38 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:13:52.411 "subsystems": [ 00:13:52.411 { 00:13:52.411 "subsystem": "keyring", 00:13:52.411 "config": [] 00:13:52.411 }, 00:13:52.411 { 00:13:52.411 "subsystem": "iobuf", 00:13:52.411 "config": [ 00:13:52.411 { 00:13:52.411 "method": "iobuf_set_options", 00:13:52.411 "params": { 00:13:52.411 "small_pool_count": 8192, 00:13:52.411 
"large_pool_count": 1024, 00:13:52.411 "small_bufsize": 8192, 00:13:52.411 "large_bufsize": 135168 00:13:52.411 } 00:13:52.411 } 00:13:52.411 ] 00:13:52.411 }, 00:13:52.411 { 00:13:52.411 "subsystem": "sock", 00:13:52.411 "config": [ 00:13:52.411 { 00:13:52.411 "method": "sock_impl_set_options", 00:13:52.411 "params": { 00:13:52.411 "impl_name": "posix", 00:13:52.411 "recv_buf_size": 2097152, 00:13:52.411 "send_buf_size": 2097152, 00:13:52.411 "enable_recv_pipe": true, 00:13:52.411 "enable_quickack": false, 00:13:52.411 "enable_placement_id": 0, 00:13:52.411 "enable_zerocopy_send_server": true, 00:13:52.411 "enable_zerocopy_send_client": false, 00:13:52.411 "zerocopy_threshold": 0, 00:13:52.411 "tls_version": 0, 00:13:52.411 "enable_ktls": false 00:13:52.411 } 00:13:52.411 }, 00:13:52.411 { 00:13:52.411 "method": "sock_impl_set_options", 00:13:52.411 "params": { 00:13:52.411 "impl_name": "ssl", 00:13:52.411 "recv_buf_size": 4096, 00:13:52.411 "send_buf_size": 4096, 00:13:52.411 "enable_recv_pipe": true, 00:13:52.411 "enable_quickack": false, 00:13:52.411 "enable_placement_id": 0, 00:13:52.411 "enable_zerocopy_send_server": true, 00:13:52.411 "enable_zerocopy_send_client": false, 00:13:52.411 "zerocopy_threshold": 0, 00:13:52.411 "tls_version": 0, 00:13:52.411 "enable_ktls": false 00:13:52.411 } 00:13:52.411 } 00:13:52.411 ] 00:13:52.411 }, 00:13:52.411 { 00:13:52.411 "subsystem": "vmd", 00:13:52.411 "config": [] 00:13:52.411 }, 00:13:52.411 { 00:13:52.411 "subsystem": "accel", 00:13:52.411 "config": [ 00:13:52.411 { 00:13:52.411 "method": "accel_set_options", 00:13:52.411 "params": { 00:13:52.411 "small_cache_size": 128, 00:13:52.411 "large_cache_size": 16, 00:13:52.411 "task_count": 2048, 00:13:52.411 "sequence_count": 2048, 00:13:52.411 "buf_count": 2048 00:13:52.411 } 00:13:52.411 } 00:13:52.411 ] 00:13:52.411 }, 00:13:52.411 { 00:13:52.411 "subsystem": "bdev", 00:13:52.411 "config": [ 00:13:52.411 { 00:13:52.411 "method": "bdev_set_options", 00:13:52.411 "params": { 00:13:52.411 "bdev_io_pool_size": 65535, 00:13:52.411 "bdev_io_cache_size": 256, 00:13:52.411 "bdev_auto_examine": true, 00:13:52.411 "iobuf_small_cache_size": 128, 00:13:52.411 "iobuf_large_cache_size": 16 00:13:52.411 } 00:13:52.411 }, 00:13:52.411 { 00:13:52.411 "method": "bdev_raid_set_options", 00:13:52.411 "params": { 00:13:52.411 "process_window_size_kb": 1024 00:13:52.411 } 00:13:52.411 }, 00:13:52.411 { 00:13:52.411 "method": "bdev_iscsi_set_options", 00:13:52.411 "params": { 00:13:52.411 "timeout_sec": 30 00:13:52.411 } 00:13:52.411 }, 00:13:52.411 { 00:13:52.411 "method": "bdev_nvme_set_options", 00:13:52.411 "params": { 00:13:52.411 "action_on_timeout": "none", 00:13:52.411 "timeout_us": 0, 00:13:52.411 "timeout_admin_us": 0, 00:13:52.411 "keep_alive_timeout_ms": 10000, 00:13:52.411 "arbitration_burst": 0, 00:13:52.411 "low_priority_weight": 0, 00:13:52.411 "medium_priority_weight": 0, 00:13:52.411 "high_priority_weight": 0, 00:13:52.411 "nvme_adminq_poll_period_us": 10000, 00:13:52.411 "nvme_ioq_poll_period_us": 0, 00:13:52.411 "io_queue_requests": 0, 00:13:52.411 "delay_cmd_submit": true, 00:13:52.411 "transport_retry_count": 4, 00:13:52.411 "bdev_retry_count": 3, 00:13:52.411 "transport_ack_timeout": 0, 00:13:52.411 "ctrlr_loss_timeout_sec": 0, 00:13:52.411 "reconnect_delay_sec": 0, 00:13:52.411 "fast_io_fail_timeout_sec": 0, 00:13:52.411 "disable_auto_failback": false, 00:13:52.411 "generate_uuids": false, 00:13:52.411 "transport_tos": 0, 00:13:52.411 "nvme_error_stat": false, 00:13:52.411 "rdma_srq_size": 
0, 00:13:52.411 "io_path_stat": false, 00:13:52.411 "allow_accel_sequence": false, 00:13:52.411 "rdma_max_cq_size": 0, 00:13:52.411 "rdma_cm_event_timeout_ms": 0, 00:13:52.411 "dhchap_digests": [ 00:13:52.411 "sha256", 00:13:52.411 "sha384", 00:13:52.411 "sha512" 00:13:52.411 ], 00:13:52.411 "dhchap_dhgroups": [ 00:13:52.411 "null", 00:13:52.411 "ffdhe2048", 00:13:52.411 "ffdhe3072", 00:13:52.411 "ffdhe4096", 00:13:52.411 "ffdhe6144", 00:13:52.411 "ffdhe8192" 00:13:52.411 ] 00:13:52.411 } 00:13:52.411 }, 00:13:52.411 { 00:13:52.411 "method": "bdev_nvme_set_hotplug", 00:13:52.411 "params": { 00:13:52.411 "period_us": 100000, 00:13:52.411 "enable": false 00:13:52.411 } 00:13:52.411 }, 00:13:52.411 { 00:13:52.411 "method": "bdev_malloc_create", 00:13:52.411 "params": { 00:13:52.411 "name": "malloc0", 00:13:52.411 "num_blocks": 8192, 00:13:52.411 "block_size": 4096, 00:13:52.411 "physical_block_size": 4096, 00:13:52.411 "uuid": "5844cd0a-22ec-469d-9b3b-87593d060158", 00:13:52.411 "optimal_io_boundary": 0 00:13:52.411 } 00:13:52.411 }, 00:13:52.411 { 00:13:52.411 "method": "bdev_wait_for_examine" 00:13:52.411 } 00:13:52.411 ] 00:13:52.411 }, 00:13:52.411 { 00:13:52.411 "subsystem": "scsi", 00:13:52.411 "config": null 00:13:52.411 }, 00:13:52.411 { 00:13:52.411 "subsystem": "scheduler", 00:13:52.411 "config": [ 00:13:52.411 { 00:13:52.411 "method": "framework_set_scheduler", 00:13:52.411 "params": { 00:13:52.411 "name": "static" 00:13:52.411 } 00:13:52.411 } 00:13:52.411 ] 00:13:52.411 }, 00:13:52.411 { 00:13:52.411 "subsystem": "vhost_scsi", 00:13:52.411 "config": [] 00:13:52.411 }, 00:13:52.411 { 00:13:52.411 "subsystem": "vhost_blk", 00:13:52.411 "config": [] 00:13:52.411 }, 00:13:52.411 { 00:13:52.411 "subsystem": "ublk", 00:13:52.411 "config": [ 00:13:52.411 { 00:13:52.411 "method": "ublk_create_target", 00:13:52.411 "params": { 00:13:52.411 "cpumask": "1" 00:13:52.411 } 00:13:52.411 }, 00:13:52.411 { 00:13:52.411 "method": "ublk_start_disk", 00:13:52.411 "params": { 00:13:52.411 "bdev_name": "malloc0", 00:13:52.411 "ublk_id": 0, 00:13:52.411 "num_queues": 1, 00:13:52.411 "queue_depth": 128 00:13:52.411 } 00:13:52.411 } 00:13:52.411 ] 00:13:52.411 }, 00:13:52.411 { 00:13:52.411 "subsystem": "nbd", 00:13:52.411 "config": [] 00:13:52.411 }, 00:13:52.411 { 00:13:52.411 "subsystem": "nvmf", 00:13:52.411 "config": [ 00:13:52.411 { 00:13:52.411 "method": "nvmf_set_config", 00:13:52.411 "params": { 00:13:52.411 "discovery_filter": "match_any", 00:13:52.411 "admin_cmd_passthru": { 00:13:52.411 "identify_ctrlr": false 00:13:52.411 } 00:13:52.411 } 00:13:52.411 }, 00:13:52.411 { 00:13:52.411 "method": "nvmf_set_max_subsystems", 00:13:52.411 "params": { 00:13:52.411 "max_subsystems": 1024 00:13:52.411 } 00:13:52.411 }, 00:13:52.411 { 00:13:52.411 "method": "nvmf_set_crdt", 00:13:52.411 "params": { 00:13:52.411 "crdt1": 0, 00:13:52.411 "crdt2": 0, 00:13:52.411 "crdt3": 0 00:13:52.411 } 00:13:52.411 } 00:13:52.411 ] 00:13:52.411 }, 00:13:52.411 { 00:13:52.411 "subsystem": "iscsi", 00:13:52.411 "config": [ 00:13:52.411 { 00:13:52.411 "method": "iscsi_set_options", 00:13:52.411 "params": { 00:13:52.411 "node_base": "iqn.2016-06.io.spdk", 00:13:52.411 "max_sessions": 128, 00:13:52.411 "max_connections_per_session": 2, 00:13:52.411 "max_queue_depth": 64, 00:13:52.411 "default_time2wait": 2, 00:13:52.411 "default_time2retain": 20, 00:13:52.411 "first_burst_length": 8192, 00:13:52.411 "immediate_data": true, 00:13:52.411 "allow_duplicated_isid": false, 00:13:52.411 "error_recovery_level": 0, 00:13:52.411 
"nop_timeout": 60, 00:13:52.411 "nop_in_interval": 30, 00:13:52.411 "disable_chap": false, 00:13:52.411 "require_chap": false, 00:13:52.411 "mutual_chap": false, 00:13:52.411 "chap_group": 0, 00:13:52.411 "max_large_datain_per_connection": 64, 00:13:52.411 "max_r2t_per_connection": 4, 00:13:52.411 "pdu_pool_size": 36864, 00:13:52.411 "immediate_data_pool_size": 16384, 00:13:52.411 "data_out_pool_size": 2048 00:13:52.411 } 00:13:52.411 } 00:13:52.411 ] 00:13:52.411 } 00:13:52.411 ] 00:13:52.411 }' 00:13:52.411 03:00:38 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 87415 00:13:52.411 03:00:38 ublk.test_save_ublk_config -- common/autotest_common.sh@946 -- # '[' -z 87415 ']' 00:13:52.411 03:00:38 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # kill -0 87415 00:13:52.411 03:00:38 ublk.test_save_ublk_config -- common/autotest_common.sh@951 -- # uname 00:13:52.411 03:00:38 ublk.test_save_ublk_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:52.411 03:00:38 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 87415 00:13:52.411 03:00:38 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:52.411 03:00:38 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:52.411 killing process with pid 87415 00:13:52.411 03:00:38 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 87415' 00:13:52.411 03:00:38 ublk.test_save_ublk_config -- common/autotest_common.sh@965 -- # kill 87415 00:13:52.411 03:00:38 ublk.test_save_ublk_config -- common/autotest_common.sh@970 -- # wait 87415 00:13:52.670 [2024-05-14 03:00:38.620410] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:52.670 [2024-05-14 03:00:38.664236] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:52.670 [2024-05-14 03:00:38.664432] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:52.670 [2024-05-14 03:00:38.666447] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:52.670 [2024-05-14 03:00:38.666512] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:52.670 [2024-05-14 03:00:38.666534] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:52.670 [2024-05-14 03:00:38.666588] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:13:52.670 [2024-05-14 03:00:38.666764] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:13:52.929 03:00:38 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=87452 00:13:52.929 03:00:38 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 87452 00:13:52.929 03:00:38 ublk.test_save_ublk_config -- common/autotest_common.sh@827 -- # '[' -z 87452 ']' 00:13:52.929 03:00:38 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:52.929 03:00:38 ublk.test_save_ublk_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:52.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:52.929 03:00:38 ublk.test_save_ublk_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:52.929 03:00:38 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:52.929 03:00:38 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:52.929 03:00:38 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:13:52.929 03:00:38 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:13:52.929 "subsystems": [ 00:13:52.929 { 00:13:52.929 "subsystem": "keyring", 00:13:52.929 "config": [] 00:13:52.929 }, 00:13:52.929 { 00:13:52.929 "subsystem": "iobuf", 00:13:52.929 "config": [ 00:13:52.929 { 00:13:52.929 "method": "iobuf_set_options", 00:13:52.929 "params": { 00:13:52.929 "small_pool_count": 8192, 00:13:52.929 "large_pool_count": 1024, 00:13:52.929 "small_bufsize": 8192, 00:13:52.929 "large_bufsize": 135168 00:13:52.929 } 00:13:52.929 } 00:13:52.929 ] 00:13:52.929 }, 00:13:52.929 { 00:13:52.929 "subsystem": "sock", 00:13:52.929 "config": [ 00:13:52.929 { 00:13:52.929 "method": "sock_impl_set_options", 00:13:52.929 "params": { 00:13:52.929 "impl_name": "posix", 00:13:52.929 "recv_buf_size": 2097152, 00:13:52.929 "send_buf_size": 2097152, 00:13:52.929 "enable_recv_pipe": true, 00:13:52.929 "enable_quickack": false, 00:13:52.929 "enable_placement_id": 0, 00:13:52.929 "enable_zerocopy_send_server": true, 00:13:52.929 "enable_zerocopy_send_client": false, 00:13:52.929 "zerocopy_threshold": 0, 00:13:52.929 "tls_version": 0, 00:13:52.929 "enable_ktls": false 00:13:52.929 } 00:13:52.929 }, 00:13:52.929 { 00:13:52.929 "method": "sock_impl_set_options", 00:13:52.929 "params": { 00:13:52.929 "impl_name": "ssl", 00:13:52.929 "recv_buf_size": 4096, 00:13:52.929 "send_buf_size": 4096, 00:13:52.929 "enable_recv_pipe": true, 00:13:52.929 "enable_quickack": false, 00:13:52.929 "enable_placement_id": 0, 00:13:52.929 "enable_zerocopy_send_server": true, 00:13:52.929 "enable_zerocopy_send_client": false, 00:13:52.929 "zerocopy_threshold": 0, 00:13:52.929 "tls_version": 0, 00:13:52.929 "enable_ktls": false 00:13:52.929 } 00:13:52.929 } 00:13:52.929 ] 00:13:52.929 }, 00:13:52.929 { 00:13:52.929 "subsystem": "vmd", 00:13:52.929 "config": [] 00:13:52.929 }, 00:13:52.929 { 00:13:52.929 "subsystem": "accel", 00:13:52.929 "config": [ 00:13:52.929 { 00:13:52.929 "method": "accel_set_options", 00:13:52.929 "params": { 00:13:52.929 "small_cache_size": 128, 00:13:52.929 "large_cache_size": 16, 00:13:52.929 "task_count": 2048, 00:13:52.929 "sequence_count": 2048, 00:13:52.929 "buf_count": 2048 00:13:52.929 } 00:13:52.929 } 00:13:52.929 ] 00:13:52.929 }, 00:13:52.929 { 00:13:52.929 "subsystem": "bdev", 00:13:52.929 "config": [ 00:13:52.929 { 00:13:52.929 "method": "bdev_set_options", 00:13:52.929 "params": { 00:13:52.929 "bdev_io_pool_size": 65535, 00:13:52.929 "bdev_io_cache_size": 256, 00:13:52.929 "bdev_auto_examine": true, 00:13:52.929 "iobuf_small_cache_size": 128, 00:13:52.929 "iobuf_large_cache_size": 16 00:13:52.929 } 00:13:52.929 }, 00:13:52.929 { 00:13:52.929 "method": "bdev_raid_set_options", 00:13:52.929 "params": { 00:13:52.929 "process_window_size_kb": 1024 00:13:52.929 } 00:13:52.929 }, 00:13:52.929 { 00:13:52.929 "method": "bdev_iscsi_set_options", 00:13:52.929 "params": { 00:13:52.929 "timeout_sec": 30 00:13:52.929 } 00:13:52.929 }, 00:13:52.929 { 00:13:52.929 "method": "bdev_nvme_set_options", 00:13:52.929 "params": { 00:13:52.929 "action_on_timeout": "none", 00:13:52.929 "timeout_us": 0, 00:13:52.929 "timeout_admin_us": 0, 00:13:52.929 "keep_alive_timeout_ms": 10000, 
00:13:52.929 "arbitration_burst": 0, 00:13:52.929 "low_priority_weight": 0, 00:13:52.929 "medium_priority_weight": 0, 00:13:52.929 "high_priority_weight": 0, 00:13:52.929 "nvme_adminq_poll_period_us": 10000, 00:13:52.929 "nvme_ioq_poll_period_us": 0, 00:13:52.929 "io_queue_requests": 0, 00:13:52.929 "delay_cmd_submit": true, 00:13:52.929 "transport_retry_count": 4, 00:13:52.929 "bdev_retry_count": 3, 00:13:52.929 "transport_ack_timeout": 0, 00:13:52.929 "ctrlr_loss_timeout_sec": 0, 00:13:52.929 "reconnect_delay_sec": 0, 00:13:52.929 "fast_io_fail_timeout_sec": 0, 00:13:52.929 "disable_auto_failback": false, 00:13:52.929 "generate_uuids": false, 00:13:52.930 "transport_tos": 0, 00:13:52.930 "nvme_error_stat": false, 00:13:52.930 "rdma_srq_size": 0, 00:13:52.930 "io_path_stat": false, 00:13:52.930 "allow_accel_sequence": false, 00:13:52.930 "rdma_max_cq_size": 0, 00:13:52.930 "rdma_cm_event_timeout_ms": 0, 00:13:52.930 "dhchap_digests": [ 00:13:52.930 "sha256", 00:13:52.930 "sha384", 00:13:52.930 "sha512" 00:13:52.930 ], 00:13:52.930 "dhchap_dhgroups": [ 00:13:52.930 "null", 00:13:52.930 "ffdhe2048", 00:13:52.930 "ffdhe3072", 00:13:52.930 "ffdhe4096", 00:13:52.930 "ffdhe6144", 00:13:52.930 "ffdhe8192" 00:13:52.930 ] 00:13:52.930 } 00:13:52.930 }, 00:13:52.930 { 00:13:52.930 "method": "bdev_nvme_set_hotplug", 00:13:52.930 "params": { 00:13:52.930 "period_us": 100000, 00:13:52.930 "enable": false 00:13:52.930 } 00:13:52.930 }, 00:13:52.930 { 00:13:52.930 "method": "bdev_malloc_create", 00:13:52.930 "params": { 00:13:52.930 "name": "malloc0", 00:13:52.930 "num_blocks": 8192, 00:13:52.930 "block_size": 4096, 00:13:52.930 "physical_block_size": 4096, 00:13:52.930 "uuid": "5844cd0a-22ec-469d-9b3b-87593d060158", 00:13:52.930 "optimal_io_boundary": 0 00:13:52.930 } 00:13:52.930 }, 00:13:52.930 { 00:13:52.930 "method": "bdev_wait_for_examine" 00:13:52.930 } 00:13:52.930 ] 00:13:52.930 }, 00:13:52.930 { 00:13:52.930 "subsystem": "scsi", 00:13:52.930 "config": null 00:13:52.930 }, 00:13:52.930 { 00:13:52.930 "subsystem": "scheduler", 00:13:52.930 "config": [ 00:13:52.930 { 00:13:52.930 "method": "framework_set_scheduler", 00:13:52.930 "params": { 00:13:52.930 "name": "static" 00:13:52.930 } 00:13:52.930 } 00:13:52.930 ] 00:13:52.930 }, 00:13:52.930 { 00:13:52.930 "subsystem": "vhost_scsi", 00:13:52.930 "config": [] 00:13:52.930 }, 00:13:52.930 { 00:13:52.930 "subsystem": "vhost_blk", 00:13:52.930 "config": [] 00:13:52.930 }, 00:13:52.930 { 00:13:52.930 "subsystem": "ublk", 00:13:52.930 "config": [ 00:13:52.930 { 00:13:52.930 "method": "ublk_create_target", 00:13:52.930 "params": { 00:13:52.930 "cpumask": "1" 00:13:52.930 } 00:13:52.930 }, 00:13:52.930 { 00:13:52.930 "method": "ublk_start_disk", 00:13:52.930 "params": { 00:13:52.930 "bdev_name": "malloc0", 00:13:52.930 "ublk_id": 0, 00:13:52.930 "num_queues": 1, 00:13:52.930 "queue_depth": 128 00:13:52.930 } 00:13:52.930 } 00:13:52.930 ] 00:13:52.930 }, 00:13:52.930 { 00:13:52.930 "subsystem": "nbd", 00:13:52.930 "config": [] 00:13:52.930 }, 00:13:52.930 { 00:13:52.930 "subsystem": "nvmf", 00:13:52.930 "config": [ 00:13:52.930 { 00:13:52.930 "method": "nvmf_set_config", 00:13:52.930 "params": { 00:13:52.930 "discovery_filter": "match_any", 00:13:52.930 "admin_cmd_passthru": { 00:13:52.930 "identify_ctrlr": false 00:13:52.930 } 00:13:52.930 } 00:13:52.930 }, 00:13:52.930 { 00:13:52.930 "method": "nvmf_set_max_subsystems", 00:13:52.930 "params": { 00:13:52.930 "max_subsystems": 1024 00:13:52.930 } 00:13:52.930 }, 00:13:52.930 { 00:13:52.930 "method": 
"nvmf_set_crdt", 00:13:52.930 "params": { 00:13:52.930 "crdt1": 0, 00:13:52.930 "crdt2": 0, 00:13:52.930 "crdt3": 0 00:13:52.930 } 00:13:52.930 } 00:13:52.930 ] 00:13:52.930 }, 00:13:52.930 { 00:13:52.930 "subsystem": "iscsi", 00:13:52.930 "config": [ 00:13:52.930 { 00:13:52.930 "method": "iscsi_set_options", 00:13:52.930 "params": { 00:13:52.930 "node_base": "iqn.2016-06.io.spdk", 00:13:52.930 "max_sessions": 128, 00:13:52.930 "max_connections_per_session": 2, 00:13:52.930 "max_queue_depth": 64, 00:13:52.930 "default_time2wait": 2, 00:13:52.930 "default_time2retain": 20, 00:13:52.930 "first_burst_length": 8192, 00:13:52.930 "immediate_data": true, 00:13:52.930 "allow_duplicated_isid": false, 00:13:52.930 "error_recovery_level": 0, 00:13:52.930 "nop_timeout": 60, 00:13:52.930 "nop_in_interval": 30, 00:13:52.930 "disable_chap": false, 00:13:52.930 "require_chap": false, 00:13:52.930 "mutual_chap": false, 00:13:52.930 "chap_group": 0, 00:13:52.930 "max_large_datain_per_connection": 64, 00:13:52.930 "max_r2t_per_connection": 4, 00:13:52.930 "pdu_pool_size": 36864, 00:13:52.930 "immediate_data_pool_size": 16384, 00:13:52.930 "data_out_pool_size": 2048 00:13:52.930 } 00:13:52.930 } 00:13:52.930 ] 00:13:52.930 } 00:13:52.930 ] 00:13:52.930 }' 00:13:53.189 [2024-05-14 03:00:38.998804] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:13:53.189 [2024-05-14 03:00:38.998992] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87452 ] 00:13:53.189 [2024-05-14 03:00:39.138420] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:13:53.189 [2024-05-14 03:00:39.157296] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.189 [2024-05-14 03:00:39.205434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.756 [2024-05-14 03:00:39.485227] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:53.756 [2024-05-14 03:00:39.485553] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:53.756 [2024-05-14 03:00:39.492379] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:53.756 [2024-05-14 03:00:39.492467] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:53.756 [2024-05-14 03:00:39.492484] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:53.756 [2024-05-14 03:00:39.492493] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:53.756 [2024-05-14 03:00:39.501322] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:53.756 [2024-05-14 03:00:39.501350] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:53.756 [2024-05-14 03:00:39.511210] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:53.756 [2024-05-14 03:00:39.511357] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:53.756 [2024-05-14 03:00:39.527239] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:54.014 03:00:39 ublk.test_save_ublk_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:54.014 03:00:39 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # return 0 00:13:54.014 03:00:39 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:13:54.014 03:00:39 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:13:54.014 03:00:39 ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.014 03:00:39 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:54.014 03:00:39 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.014 03:00:39 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:54.014 03:00:39 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:13:54.014 03:00:39 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 87452 00:13:54.014 03:00:39 ublk.test_save_ublk_config -- common/autotest_common.sh@946 -- # '[' -z 87452 ']' 00:13:54.014 03:00:39 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # kill -0 87452 00:13:54.014 03:00:39 ublk.test_save_ublk_config -- common/autotest_common.sh@951 -- # uname 00:13:54.014 03:00:39 ublk.test_save_ublk_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:54.014 03:00:39 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 87452 00:13:54.014 03:00:40 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:54.014 03:00:40 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:54.014 killing process with pid 87452 00:13:54.014 03:00:40 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 87452' 00:13:54.014 03:00:40 ublk.test_save_ublk_config -- common/autotest_common.sh@965 -- # kill 
87452 00:13:54.014 03:00:40 ublk.test_save_ublk_config -- common/autotest_common.sh@970 -- # wait 87452 00:13:54.271 [2024-05-14 03:00:40.207890] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:54.271 [2024-05-14 03:00:40.240275] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:54.271 [2024-05-14 03:00:40.240521] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:54.271 [2024-05-14 03:00:40.244161] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:54.271 [2024-05-14 03:00:40.244225] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:54.271 [2024-05-14 03:00:40.244237] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:54.271 [2024-05-14 03:00:40.244277] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:13:54.271 [2024-05-14 03:00:40.244463] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:13:54.529 03:00:40 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:13:54.529 00:13:54.529 real 0m3.462s 00:13:54.529 user 0m2.949s 00:13:54.529 sys 0m1.419s 00:13:54.529 03:00:40 ublk.test_save_ublk_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:54.529 03:00:40 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:54.529 ************************************ 00:13:54.529 END TEST test_save_ublk_config 00:13:54.529 ************************************ 00:13:54.529 03:00:40 ublk -- ublk/ublk.sh@139 -- # spdk_pid=87498 00:13:54.529 03:00:40 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:54.529 03:00:40 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:54.529 03:00:40 ublk -- ublk/ublk.sh@141 -- # waitforlisten 87498 00:13:54.529 03:00:40 ublk -- common/autotest_common.sh@827 -- # '[' -z 87498 ']' 00:13:54.529 03:00:40 ublk -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:54.529 03:00:40 ublk -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:54.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:54.529 03:00:40 ublk -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:54.529 03:00:40 ublk -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:54.529 03:00:40 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:54.787 [2024-05-14 03:00:40.599919] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:13:54.787 [2024-05-14 03:00:40.600143] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87498 ] 00:13:54.787 [2024-05-14 03:00:40.749069] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:13:54.787 [2024-05-14 03:00:40.768409] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:54.787 [2024-05-14 03:00:40.805630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.787 [2024-05-14 03:00:40.805680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:55.722 03:00:41 ublk -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:55.722 03:00:41 ublk -- common/autotest_common.sh@860 -- # return 0 00:13:55.722 03:00:41 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:13:55.722 03:00:41 ublk -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:55.722 03:00:41 ublk -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:55.722 03:00:41 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:55.722 ************************************ 00:13:55.722 START TEST test_create_ublk 00:13:55.722 ************************************ 00:13:55.722 03:00:41 ublk.test_create_ublk -- common/autotest_common.sh@1121 -- # test_create_ublk 00:13:55.722 03:00:41 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:13:55.722 03:00:41 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.722 03:00:41 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:55.722 [2024-05-14 03:00:41.532228] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:55.722 [2024-05-14 03:00:41.533427] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:55.722 03:00:41 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.722 03:00:41 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:13:55.722 03:00:41 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:13:55.722 03:00:41 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.722 03:00:41 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:55.722 03:00:41 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.722 03:00:41 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:13:55.722 03:00:41 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:13:55.722 03:00:41 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.722 03:00:41 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:55.722 [2024-05-14 03:00:41.593417] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:13:55.722 [2024-05-14 03:00:41.593952] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:13:55.722 [2024-05-14 03:00:41.593981] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:55.722 [2024-05-14 03:00:41.593995] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:55.722 [2024-05-14 03:00:41.601182] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:55.722 [2024-05-14 03:00:41.601219] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:55.722 [2024-05-14 03:00:41.608194] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:55.722 [2024-05-14 03:00:41.614299] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:55.722 [2024-05-14 03:00:41.634224] ublk.c: 328:ublk_ctrl_process_cqe: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:55.722 03:00:41 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.722 03:00:41 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:13:55.722 03:00:41 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:13:55.722 03:00:41 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:13:55.722 03:00:41 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.722 03:00:41 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:55.722 03:00:41 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.722 03:00:41 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:13:55.722 { 00:13:55.722 "ublk_device": "/dev/ublkb0", 00:13:55.722 "id": 0, 00:13:55.722 "queue_depth": 512, 00:13:55.722 "num_queues": 4, 00:13:55.722 "bdev_name": "Malloc0" 00:13:55.722 } 00:13:55.722 ]' 00:13:55.722 03:00:41 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:13:55.722 03:00:41 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:55.722 03:00:41 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:13:55.981 03:00:41 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:13:55.981 03:00:41 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:13:55.981 03:00:41 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:13:55.981 03:00:41 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:13:55.981 03:00:41 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:13:55.981 03:00:41 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:13:55.981 03:00:41 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:13:55.981 03:00:41 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:13:55.981 03:00:41 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:13:55.981 03:00:41 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:13:55.981 03:00:41 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:13:55.981 03:00:41 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:13:55.981 03:00:41 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:13:55.981 03:00:41 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:13:55.981 03:00:41 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:13:55.981 03:00:41 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:13:55.981 03:00:41 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:13:55.981 03:00:41 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:13:55.981 03:00:41 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:13:55.981 fio: verification read phase will never 
start because write phase uses all of runtime 00:13:55.981 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:13:55.981 fio-3.35 00:13:55.981 Starting 1 process 00:14:08.172 00:14:08.172 fio_test: (groupid=0, jobs=1): err= 0: pid=87543: Tue May 14 03:00:52 2024 00:14:08.172 write: IOPS=11.9k, BW=46.7MiB/s (48.9MB/s)(467MiB/10001msec); 0 zone resets 00:14:08.172 clat (usec): min=53, max=4095, avg=82.39, stdev=124.85 00:14:08.172 lat (usec): min=54, max=4096, avg=83.12, stdev=124.85 00:14:08.172 clat percentiles (usec): 00:14:08.172 | 1.00th=[ 59], 5.00th=[ 68], 10.00th=[ 70], 20.00th=[ 71], 00:14:08.172 | 30.00th=[ 72], 40.00th=[ 73], 50.00th=[ 74], 60.00th=[ 75], 00:14:08.172 | 70.00th=[ 77], 80.00th=[ 83], 90.00th=[ 89], 95.00th=[ 96], 00:14:08.172 | 99.00th=[ 110], 99.50th=[ 118], 99.90th=[ 2638], 99.95th=[ 3097], 00:14:08.172 | 99.99th=[ 3720] 00:14:08.172 bw ( KiB/s): min=47272, max=51632, per=100.00%, avg=47828.21, stdev=937.02, samples=19 00:14:08.172 iops : min=11818, max=12908, avg=11957.05, stdev=234.25, samples=19 00:14:08.172 lat (usec) : 100=96.87%, 250=2.81%, 500=0.01%, 750=0.03%, 1000=0.01% 00:14:08.172 lat (msec) : 2=0.11%, 4=0.16%, 10=0.01% 00:14:08.172 cpu : usr=2.77%, sys=8.16%, ctx=119446, majf=0, minf=795 00:14:08.172 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:08.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:08.172 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:08.172 issued rwts: total=0,119445,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:08.172 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:08.172 00:14:08.172 Run status group 0 (all jobs): 00:14:08.172 WRITE: bw=46.7MiB/s (48.9MB/s), 46.7MiB/s-46.7MiB/s (48.9MB/s-48.9MB/s), io=467MiB (489MB), run=10001-10001msec 00:14:08.173 00:14:08.173 Disk stats (read/write): 00:14:08.173 ublkb0: ios=0/118232, merge=0/0, ticks=0/8886, in_queue=8887, util=99.10% 00:14:08.173 03:00:52 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:14:08.173 03:00:52 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.173 03:00:52 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:08.173 [2024-05-14 03:00:52.137952] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:08.173 [2024-05-14 03:00:52.169745] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:08.173 [2024-05-14 03:00:52.174569] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:08.173 [2024-05-14 03:00:52.182308] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:08.173 [2024-05-14 03:00:52.182652] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:08.173 [2024-05-14 03:00:52.182678] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:08.173 03:00:52 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.173 03:00:52 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:14:08.173 03:00:52 ublk.test_create_ublk -- common/autotest_common.sh@648 -- # local es=0 00:14:08.173 03:00:52 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:14:08.173 03:00:52 ublk.test_create_ublk -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:14:08.173 03:00:52 
ublk.test_create_ublk -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:08.173 03:00:52 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:14:08.173 03:00:52 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:08.173 03:00:52 ublk.test_create_ublk -- common/autotest_common.sh@651 -- # rpc_cmd ublk_stop_disk 0 00:14:08.173 03:00:52 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.173 03:00:52 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:08.173 [2024-05-14 03:00:52.202339] ublk.c:1071:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:14:08.173 request: 00:14:08.173 { 00:14:08.173 "ublk_id": 0, 00:14:08.173 "method": "ublk_stop_disk", 00:14:08.173 "req_id": 1 00:14:08.173 } 00:14:08.173 Got JSON-RPC error response 00:14:08.173 response: 00:14:08.173 { 00:14:08.173 "code": -19, 00:14:08.173 "message": "No such device" 00:14:08.173 } 00:14:08.173 03:00:52 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:14:08.173 03:00:52 ublk.test_create_ublk -- common/autotest_common.sh@651 -- # es=1 00:14:08.173 03:00:52 ublk.test_create_ublk -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:08.173 03:00:52 ublk.test_create_ublk -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:08.173 03:00:52 ublk.test_create_ublk -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:08.173 03:00:52 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:14:08.173 03:00:52 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.173 03:00:52 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:08.173 [2024-05-14 03:00:52.217314] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:14:08.173 [2024-05-14 03:00:52.219258] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:14:08.173 [2024-05-14 03:00:52.219304] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:08.173 03:00:52 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.173 03:00:52 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:08.173 03:00:52 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.173 03:00:52 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:08.173 03:00:52 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.173 03:00:52 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:14:08.173 03:00:52 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:08.173 03:00:52 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.173 03:00:52 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:08.173 03:00:52 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.173 03:00:52 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:08.173 03:00:52 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:14:08.173 03:00:52 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:08.173 03:00:52 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:08.173 03:00:52 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.173 03:00:52 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:08.173 03:00:52 
ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.173 03:00:52 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:08.173 03:00:52 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:14:08.173 03:00:52 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:08.173 00:14:08.173 real 0m10.884s 00:14:08.173 user 0m0.718s 00:14:08.173 sys 0m0.914s 00:14:08.173 03:00:52 ublk.test_create_ublk -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:08.173 03:00:52 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:08.173 ************************************ 00:14:08.173 END TEST test_create_ublk 00:14:08.173 ************************************ 00:14:08.173 03:00:52 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:14:08.173 03:00:52 ublk -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:14:08.173 03:00:52 ublk -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:08.173 03:00:52 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:08.173 ************************************ 00:14:08.173 START TEST test_create_multi_ublk 00:14:08.173 ************************************ 00:14:08.173 03:00:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@1121 -- # test_create_multi_ublk 00:14:08.173 03:00:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:14:08.173 03:00:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.173 03:00:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:08.173 [2024-05-14 03:00:52.456161] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:08.173 [2024-05-14 03:00:52.457236] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:08.173 03:00:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.173 03:00:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:14:08.173 03:00:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:14:08.173 03:00:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:08.173 03:00:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:14:08.173 03:00:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.173 03:00:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:08.173 03:00:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.173 03:00:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:14:08.173 03:00:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:08.173 03:00:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.173 03:00:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:08.173 [2024-05-14 03:00:52.522417] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:14:08.173 [2024-05-14 03:00:52.522943] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:08.173 [2024-05-14 03:00:52.522970] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:08.173 [2024-05-14 03:00:52.522982] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd 
UBLK_CMD_ADD_DEV 00:14:08.173 [2024-05-14 03:00:52.537176] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:08.173 [2024-05-14 03:00:52.537203] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:08.173 [2024-05-14 03:00:52.544237] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:08.173 [2024-05-14 03:00:52.544962] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:08.173 [2024-05-14 03:00:52.555278] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:08.173 03:00:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.173 03:00:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:14:08.173 03:00:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:08.173 03:00:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:14:08.173 03:00:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.173 03:00:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:08.173 03:00:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.173 03:00:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:14:08.173 03:00:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:14:08.173 03:00:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.173 03:00:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:08.173 [2024-05-14 03:00:52.624393] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:14:08.173 [2024-05-14 03:00:52.624926] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:14:08.173 [2024-05-14 03:00:52.624949] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:08.173 [2024-05-14 03:00:52.624970] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:08.173 [2024-05-14 03:00:52.632175] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:08.173 [2024-05-14 03:00:52.632204] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:08.173 [2024-05-14 03:00:52.639223] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:08.173 [2024-05-14 03:00:52.639981] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:08.173 [2024-05-14 03:00:52.661158] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:08.173 03:00:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.173 03:00:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:14:08.173 03:00:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:08.173 03:00:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:14:08.173 03:00:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.173 03:00:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:08.173 03:00:52 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.173 03:00:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:14:08.173 03:00:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:14:08.173 03:00:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.174 03:00:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:08.174 [2024-05-14 03:00:52.723370] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:14:08.174 [2024-05-14 03:00:52.723895] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:14:08.174 [2024-05-14 03:00:52.723922] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:14:08.174 [2024-05-14 03:00:52.723933] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:14:08.174 [2024-05-14 03:00:52.730172] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:08.174 [2024-05-14 03:00:52.730199] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:08.174 [2024-05-14 03:00:52.740238] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:08.174 [2024-05-14 03:00:52.740969] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:14:08.174 [2024-05-14 03:00:52.749208] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:14:08.174 03:00:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.174 03:00:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:14:08.174 03:00:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:08.174 03:00:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:14:08.174 03:00:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.174 03:00:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:08.174 03:00:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.174 03:00:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:14:08.174 03:00:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:14:08.174 03:00:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.174 03:00:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:08.174 [2024-05-14 03:00:52.817381] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:14:08.174 [2024-05-14 03:00:52.817898] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:14:08.174 [2024-05-14 03:00:52.817921] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:14:08.174 [2024-05-14 03:00:52.817935] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:14:08.174 [2024-05-14 03:00:52.821648] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:08.174 [2024-05-14 03:00:52.821698] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:08.174 [2024-05-14 03:00:52.832197] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 
completed 00:14:08.174 [2024-05-14 03:00:52.832931] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:14:08.174 [2024-05-14 03:00:52.836947] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:14:08.174 03:00:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.174 03:00:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:14:08.174 03:00:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:14:08.174 03:00:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.174 03:00:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:08.174 03:00:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.174 03:00:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:14:08.174 { 00:14:08.174 "ublk_device": "/dev/ublkb0", 00:14:08.174 "id": 0, 00:14:08.174 "queue_depth": 512, 00:14:08.174 "num_queues": 4, 00:14:08.174 "bdev_name": "Malloc0" 00:14:08.174 }, 00:14:08.174 { 00:14:08.174 "ublk_device": "/dev/ublkb1", 00:14:08.174 "id": 1, 00:14:08.174 "queue_depth": 512, 00:14:08.174 "num_queues": 4, 00:14:08.174 "bdev_name": "Malloc1" 00:14:08.174 }, 00:14:08.174 { 00:14:08.174 "ublk_device": "/dev/ublkb2", 00:14:08.174 "id": 2, 00:14:08.174 "queue_depth": 512, 00:14:08.174 "num_queues": 4, 00:14:08.174 "bdev_name": "Malloc2" 00:14:08.174 }, 00:14:08.174 { 00:14:08.174 "ublk_device": "/dev/ublkb3", 00:14:08.174 "id": 3, 00:14:08.174 "queue_depth": 512, 00:14:08.174 "num_queues": 4, 00:14:08.174 "bdev_name": "Malloc3" 00:14:08.174 } 00:14:08.174 ]' 00:14:08.174 03:00:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:14:08.174 03:00:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:08.174 03:00:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:14:08.174 03:00:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:08.174 03:00:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:14:08.174 03:00:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:14:08.174 03:00:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:14:08.174 03:00:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:08.174 03:00:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:14:08.174 03:00:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:08.174 03:00:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:14:08.174 03:00:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:08.174 03:00:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:08.174 03:00:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:14:08.174 03:00:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:14:08.174 03:00:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:14:08.174 03:00:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:14:08.174 03:00:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:14:08.174 03:00:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 
512 = \5\1\2 ]] 00:14:08.174 03:00:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:14:08.174 03:00:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:08.174 03:00:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:14:08.174 03:00:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:14:08.174 03:00:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:08.174 03:00:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:14:08.174 03:00:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:14:08.174 03:00:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:14:08.174 03:00:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:14:08.174 03:00:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:14:08.174 03:00:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:08.174 03:00:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:14:08.174 03:00:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:08.174 03:00:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:14:08.174 03:00:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:14:08.174 03:00:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:08.174 03:00:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:14:08.174 03:00:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:14:08.174 03:00:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:14:08.174 03:00:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:14:08.174 03:00:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:14:08.174 03:00:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:08.174 03:00:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:14:08.174 03:00:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:08.174 03:00:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:14:08.174 03:00:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:14:08.174 03:00:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:14:08.174 03:00:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:14:08.174 03:00:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:08.174 03:00:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:14:08.174 03:00:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.174 03:00:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:08.174 [2024-05-14 03:00:53.851350] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:08.174 [2024-05-14 03:00:53.892201] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:08.174 [2024-05-14 03:00:53.897464] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:08.174 [2024-05-14 03:00:53.906170] ublk.c: 
328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:08.174 [2024-05-14 03:00:53.906531] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:08.174 [2024-05-14 03:00:53.906562] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:08.174 03:00:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.174 03:00:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:08.174 03:00:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:14:08.174 03:00:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.174 03:00:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:08.174 [2024-05-14 03:00:53.914292] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:14:08.174 [2024-05-14 03:00:53.959256] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:08.174 [2024-05-14 03:00:53.960530] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:14:08.174 [2024-05-14 03:00:53.967182] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:08.174 [2024-05-14 03:00:53.967535] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:14:08.174 [2024-05-14 03:00:53.967557] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:14:08.174 03:00:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.174 03:00:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:08.174 03:00:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:14:08.174 03:00:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.174 03:00:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:08.174 [2024-05-14 03:00:53.982373] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:14:08.174 [2024-05-14 03:00:54.028641] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:08.174 [2024-05-14 03:00:54.030231] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:14:08.174 [2024-05-14 03:00:54.036171] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:08.175 [2024-05-14 03:00:54.036516] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:14:08.175 [2024-05-14 03:00:54.036538] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:14:08.175 03:00:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.175 03:00:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:08.175 03:00:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:14:08.175 03:00:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.175 03:00:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:08.175 [2024-05-14 03:00:54.050337] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:14:08.175 [2024-05-14 03:00:54.090582] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:08.175 [2024-05-14 03:00:54.092037] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: 
ctrl cmd UBLK_CMD_DEL_DEV 00:14:08.175 [2024-05-14 03:00:54.098272] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:08.175 [2024-05-14 03:00:54.098602] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:14:08.175 [2024-05-14 03:00:54.098624] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:14:08.175 03:00:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.175 03:00:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:14:08.434 [2024-05-14 03:00:54.364400] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:14:08.434 [2024-05-14 03:00:54.366261] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:14:08.434 [2024-05-14 03:00:54.366331] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:08.434 03:00:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:14:08.434 03:00:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:08.434 03:00:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:08.434 03:00:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.434 03:00:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:08.434 03:00:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.434 03:00:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:08.434 03:00:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:14:08.434 03:00:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.434 03:00:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:08.693 03:00:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.693 03:00:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:08.693 03:00:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:14:08.693 03:00:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.693 03:00:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:08.693 03:00:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.693 03:00:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:08.693 03:00:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:14:08.693 03:00:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.693 03:00:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:08.693 03:00:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.693 03:00:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:14:08.693 03:00:54 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:08.693 03:00:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.693 03:00:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:08.693 03:00:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.693 03:00:54 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:08.693 03:00:54 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:14:08.693 03:00:54 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:08.693 03:00:54 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:08.693 03:00:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.693 03:00:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:08.693 03:00:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.693 03:00:54 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:08.693 03:00:54 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:14:08.693 03:00:54 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:08.693 00:14:08.693 real 0m2.209s 00:14:08.693 user 0m1.252s 00:14:08.693 sys 0m0.156s 00:14:08.693 03:00:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:08.693 03:00:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:08.693 ************************************ 00:14:08.693 END TEST test_create_multi_ublk 00:14:08.693 ************************************ 00:14:08.693 03:00:54 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:14:08.693 03:00:54 ublk -- ublk/ublk.sh@147 -- # cleanup 00:14:08.693 03:00:54 ublk -- ublk/ublk.sh@130 -- # killprocess 87498 00:14:08.693 03:00:54 ublk -- common/autotest_common.sh@946 -- # '[' -z 87498 ']' 00:14:08.693 03:00:54 ublk -- common/autotest_common.sh@950 -- # kill -0 87498 00:14:08.693 03:00:54 ublk -- common/autotest_common.sh@951 -- # uname 00:14:08.693 03:00:54 ublk -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:08.693 03:00:54 ublk -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 87498 00:14:08.693 03:00:54 ublk -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:08.693 killing process with pid 87498 00:14:08.693 03:00:54 ublk -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:08.693 03:00:54 ublk -- common/autotest_common.sh@964 -- # echo 'killing process with pid 87498' 00:14:08.693 03:00:54 ublk -- common/autotest_common.sh@965 -- # kill 87498 00:14:08.693 03:00:54 ublk -- common/autotest_common.sh@970 -- # wait 87498 00:14:08.951 [2024-05-14 03:00:54.809963] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:14:08.951 [2024-05-14 03:00:54.810042] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:14:09.209 00:14:09.209 real 0m18.131s 00:14:09.209 user 0m28.988s 00:14:09.209 sys 0m7.493s 00:14:09.209 03:00:55 ublk -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:09.209 ************************************ 00:14:09.209 END TEST ublk 00:14:09.209 ************************************ 00:14:09.209 03:00:55 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:09.209 03:00:55 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:09.209 03:00:55 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:14:09.210 03:00:55 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:09.210 03:00:55 -- common/autotest_common.sh@10 -- # set +x 00:14:09.210 ************************************ 00:14:09.210 START TEST ublk_recovery 00:14:09.210 ************************************ 00:14:09.210 03:00:55 ublk_recovery -- 
common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:09.210 * Looking for test storage... 00:14:09.210 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:09.210 03:00:55 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:09.210 03:00:55 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:09.210 03:00:55 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:09.210 03:00:55 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:09.210 03:00:55 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:09.210 03:00:55 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:09.210 03:00:55 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:09.210 03:00:55 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:09.210 03:00:55 ublk_recovery -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:14:09.210 03:00:55 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:14:09.210 03:00:55 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=87844 00:14:09.210 03:00:55 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:09.210 03:00:55 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:09.210 03:00:55 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 87844 00:14:09.210 03:00:55 ublk_recovery -- common/autotest_common.sh@827 -- # '[' -z 87844 ']' 00:14:09.210 03:00:55 ublk_recovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:09.210 03:00:55 ublk_recovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:09.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:09.210 03:00:55 ublk_recovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:09.210 03:00:55 ublk_recovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:09.210 03:00:55 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:09.469 [2024-05-14 03:00:55.281452] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:14:09.469 [2024-05-14 03:00:55.281695] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87844 ] 00:14:09.469 [2024-05-14 03:00:55.433071] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
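The surrounding lines show ublk_recovery.sh bringing up its first SPDK target and exposing a malloc bdev through /dev/ublkb1 before any I/O starts. A minimal bash sketch of that setup, assuming spdk_tgt and rpc.py are on PATH; the polling loop is a stand-in for the test's waitforlisten helper and is not the exact mechanism the script uses:

  #!/usr/bin/env bash
  # Sketch only: spdk_tgt/rpc.py assumed on PATH; polling loop replaces waitforlisten.
  modprobe ublk_drv                                   # kernel-side ublk driver, as in the test
  spdk_tgt -m 0x3 -L ublk &                           # two reactors, ublk debug logging
  spdk_pid=$!
  until rpc.py spdk_get_version >/dev/null 2>&1; do sleep 0.5; done
  rpc.py ublk_create_target
  rpc.py bdev_malloc_create -b malloc0 64 4096        # 64 MB backing bdev, 4 KiB blocks
  rpc.py ublk_start_disk malloc0 1 -q 2 -d 128        # exposes /dev/ublkb1 (2 queues, depth 128)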
00:14:09.469 [2024-05-14 03:00:55.452720] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:09.469 [2024-05-14 03:00:55.492258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.469 [2024-05-14 03:00:55.492307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:10.408 03:00:56 ublk_recovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:10.408 03:00:56 ublk_recovery -- common/autotest_common.sh@860 -- # return 0 00:14:10.408 03:00:56 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:14:10.408 03:00:56 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.408 03:00:56 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:10.408 [2024-05-14 03:00:56.221219] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:10.408 [2024-05-14 03:00:56.222408] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:10.408 03:00:56 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.408 03:00:56 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:10.408 03:00:56 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.408 03:00:56 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:10.408 malloc0 00:14:10.408 03:00:56 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.408 03:00:56 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:14:10.408 03:00:56 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.408 03:00:56 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:10.408 [2024-05-14 03:00:56.255314] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:14:10.408 [2024-05-14 03:00:56.255484] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:14:10.408 [2024-05-14 03:00:56.255503] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:10.408 [2024-05-14 03:00:56.255516] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:10.408 [2024-05-14 03:00:56.263199] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:10.408 [2024-05-14 03:00:56.263237] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:10.408 [2024-05-14 03:00:56.270194] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:10.408 [2024-05-14 03:00:56.270394] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:10.408 [2024-05-14 03:00:56.300169] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:10.408 1 00:14:10.408 03:00:56 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.408 03:00:56 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:14:11.346 03:00:57 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=87877 00:14:11.346 03:00:57 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:14:11.346 03:00:57 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:14:11.604 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:11.604 
fio-3.35 00:14:11.604 Starting 1 process 00:14:16.870 03:01:02 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 87844 00:14:16.870 03:01:02 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:14:22.161 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 87844 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:14:22.161 03:01:07 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=87988 00:14:22.161 03:01:07 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:22.161 03:01:07 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 87988 00:14:22.161 03:01:07 ublk_recovery -- common/autotest_common.sh@827 -- # '[' -z 87988 ']' 00:14:22.161 03:01:07 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:22.161 03:01:07 ublk_recovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:22.161 03:01:07 ublk_recovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:22.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:22.161 03:01:07 ublk_recovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:22.161 03:01:07 ublk_recovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:22.161 03:01:07 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:22.161 [2024-05-14 03:01:07.433242] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:14:22.161 [2024-05-14 03:01:07.433442] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87988 ] 00:14:22.161 [2024-05-14 03:01:07.588532] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
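With fio running against /dev/ublkb1, the test hard-kills the target and brings up a replacement (pid 87988); the new instance re-adopts the live ublk device with ublk_recover_disk instead of recreating it, as the lines that follow show. A sketch of that sequence, reusing the commands from this log and the same stand-in polling loop as above:

  # Sketch of the recovery scenario exercised here (fio options copied from the test command).
  fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 \
      --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 &
  fio_pid=$!
  kill -9 "$spdk_pid"                                 # hard-kill the target mid-I/O
  spdk_tgt -m 0x3 -L ublk &                           # start a fresh target process
  spdk_pid=$!
  until rpc.py spdk_get_version >/dev/null 2>&1; do sleep 0.5; done
  rpc.py ublk_create_target
  rpc.py bdev_malloc_create -b malloc0 64 4096        # recreate the backing bdev
  rpc.py ublk_recover_disk malloc0 1                  # new target takes over /dev/ublkb1
  wait "$fio_pid"                                     # fio should finish without errors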
00:14:22.161 [2024-05-14 03:01:07.603853] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:22.161 [2024-05-14 03:01:07.644852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:22.161 [2024-05-14 03:01:07.644877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:22.420 03:01:08 ublk_recovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:22.420 03:01:08 ublk_recovery -- common/autotest_common.sh@860 -- # return 0 00:14:22.420 03:01:08 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:14:22.420 03:01:08 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.420 03:01:08 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:22.420 [2024-05-14 03:01:08.377187] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:22.420 [2024-05-14 03:01:08.378408] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:22.421 03:01:08 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.421 03:01:08 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:22.421 03:01:08 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.421 03:01:08 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:22.421 malloc0 00:14:22.421 03:01:08 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.421 03:01:08 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:14:22.421 03:01:08 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.421 03:01:08 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:22.421 [2024-05-14 03:01:08.412362] ublk.c:2095:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:14:22.421 [2024-05-14 03:01:08.412418] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:22.421 [2024-05-14 03:01:08.412436] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:22.421 [2024-05-14 03:01:08.419286] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:22.421 [2024-05-14 03:01:08.419318] ublk.c:2024:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:14:22.421 1 00:14:22.421 [2024-05-14 03:01:08.419442] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:14:22.421 03:01:08 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.421 03:01:08 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 87877 00:14:22.421 [2024-05-14 03:01:08.427262] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:14:22.421 [2024-05-14 03:01:08.433669] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:14:22.421 [2024-05-14 03:01:08.439533] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:14:22.421 [2024-05-14 03:01:08.439567] ublk.c: 378:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:15:18.635 00:15:18.635 fio_test: (groupid=0, jobs=1): err= 0: pid=87880: Tue May 14 03:01:57 2024 00:15:18.635 read: IOPS=18.3k, BW=71.6MiB/s (75.1MB/s)(4295MiB/60003msec) 00:15:18.635 slat (usec): min=2, max=3375, avg= 6.29, stdev= 5.03 00:15:18.635 clat (usec): min=757, max=6137.1k, avg=3439.67, stdev=47149.58 00:15:18.635 lat (usec): 
min=887, max=6137.1k, avg=3445.96, stdev=47149.58 00:15:18.635 clat percentiles (usec): 00:15:18.635 | 1.00th=[ 2507], 5.00th=[ 2704], 10.00th=[ 2769], 20.00th=[ 2835], 00:15:18.635 | 30.00th=[ 2868], 40.00th=[ 2900], 50.00th=[ 2966], 60.00th=[ 2999], 00:15:18.635 | 70.00th=[ 3032], 80.00th=[ 3097], 90.00th=[ 3228], 95.00th=[ 4146], 00:15:18.635 | 99.00th=[ 6194], 99.50th=[ 6718], 99.90th=[ 8455], 99.95th=[ 9503], 00:15:18.635 | 99.99th=[13304] 00:15:18.635 bw ( KiB/s): min= 7160, max=89360, per=100.00%, avg=80799.23, stdev=10528.42, samples=108 00:15:18.635 iops : min= 1790, max=22340, avg=20199.80, stdev=2632.11, samples=108 00:15:18.635 write: IOPS=18.3k, BW=71.6MiB/s (75.0MB/s)(4294MiB/60003msec); 0 zone resets 00:15:18.635 slat (usec): min=2, max=554, avg= 6.34, stdev= 3.27 00:15:18.635 clat (usec): min=772, max=6137.3k, avg=3532.32, stdev=46425.95 00:15:18.635 lat (usec): min=777, max=6137.3k, avg=3538.67, stdev=46425.96 00:15:18.635 clat percentiles (usec): 00:15:18.635 | 1.00th=[ 2573], 5.00th=[ 2802], 10.00th=[ 2868], 20.00th=[ 2966], 00:15:18.635 | 30.00th=[ 2999], 40.00th=[ 3032], 50.00th=[ 3064], 60.00th=[ 3097], 00:15:18.635 | 70.00th=[ 3163], 80.00th=[ 3195], 90.00th=[ 3326], 95.00th=[ 4047], 00:15:18.635 | 99.00th=[ 6325], 99.50th=[ 6849], 99.90th=[ 8586], 99.95th=[ 9634], 00:15:18.635 | 99.99th=[13435] 00:15:18.635 bw ( KiB/s): min= 7080, max=89040, per=100.00%, avg=80760.47, stdev=10497.09, samples=108 00:15:18.635 iops : min= 1770, max=22260, avg=20190.10, stdev=2624.27, samples=108 00:15:18.635 lat (usec) : 1000=0.01% 00:15:18.635 lat (msec) : 2=0.07%, 4=94.60%, 10=5.28%, 20=0.04%, >=2000=0.01% 00:15:18.635 cpu : usr=9.63%, sys=21.84%, ctx=69398, majf=0, minf=13 00:15:18.635 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:15:18.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:18.635 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:18.635 issued rwts: total=1099554,1099178,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:18.635 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:18.635 00:15:18.635 Run status group 0 (all jobs): 00:15:18.635 READ: bw=71.6MiB/s (75.1MB/s), 71.6MiB/s-71.6MiB/s (75.1MB/s-75.1MB/s), io=4295MiB (4504MB), run=60003-60003msec 00:15:18.635 WRITE: bw=71.6MiB/s (75.0MB/s), 71.6MiB/s-71.6MiB/s (75.0MB/s-75.0MB/s), io=4294MiB (4502MB), run=60003-60003msec 00:15:18.635 00:15:18.635 Disk stats (read/write): 00:15:18.635 ublkb1: ios=1097310/1096854, merge=0/0, ticks=3676743/3657251, in_queue=7333994, util=99.93% 00:15:18.635 03:01:57 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:15:18.635 03:01:57 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.635 03:01:57 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:18.635 [2024-05-14 03:01:57.566362] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:15:18.635 [2024-05-14 03:01:57.597382] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:18.635 [2024-05-14 03:01:57.601349] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:15:18.635 [2024-05-14 03:01:57.611184] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:18.635 [2024-05-14 03:01:57.611359] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:15:18.635 [2024-05-14 03:01:57.611388] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 1 
stopped 00:15:18.635 03:01:57 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.635 03:01:57 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:15:18.635 03:01:57 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.635 03:01:57 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:18.635 [2024-05-14 03:01:57.618349] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:15:18.635 [2024-05-14 03:01:57.619757] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:15:18.635 [2024-05-14 03:01:57.619804] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:18.635 03:01:57 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.635 03:01:57 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:15:18.635 03:01:57 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:15:18.635 03:01:57 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 87988 00:15:18.635 03:01:57 ublk_recovery -- common/autotest_common.sh@946 -- # '[' -z 87988 ']' 00:15:18.635 03:01:57 ublk_recovery -- common/autotest_common.sh@950 -- # kill -0 87988 00:15:18.635 03:01:57 ublk_recovery -- common/autotest_common.sh@951 -- # uname 00:15:18.635 03:01:57 ublk_recovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:18.635 03:01:57 ublk_recovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 87988 00:15:18.635 03:01:57 ublk_recovery -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:18.635 killing process with pid 87988 00:15:18.635 03:01:57 ublk_recovery -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:18.635 03:01:57 ublk_recovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 87988' 00:15:18.635 03:01:57 ublk_recovery -- common/autotest_common.sh@965 -- # kill 87988 00:15:18.635 03:01:57 ublk_recovery -- common/autotest_common.sh@970 -- # wait 87988 00:15:18.635 [2024-05-14 03:01:57.754164] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:15:18.635 [2024-05-14 03:01:57.754259] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:15:18.635 00:15:18.635 real 1m2.905s 00:15:18.635 user 1m42.165s 00:15:18.635 sys 0m32.085s 00:15:18.635 03:01:57 ublk_recovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:18.635 ************************************ 00:15:18.635 END TEST ublk_recovery 00:15:18.635 ************************************ 00:15:18.635 03:01:57 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:18.635 03:01:58 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:15:18.635 03:01:58 -- spdk/autotest.sh@256 -- # timing_exit lib 00:15:18.635 03:01:58 -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:18.635 03:01:58 -- common/autotest_common.sh@10 -- # set +x 00:15:18.635 03:01:58 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:15:18.635 03:01:58 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:15:18.635 03:01:58 -- spdk/autotest.sh@275 -- # '[' 0 -eq 1 ']' 00:15:18.635 03:01:58 -- spdk/autotest.sh@304 -- # '[' 0 -eq 1 ']' 00:15:18.635 03:01:58 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:15:18.635 03:01:58 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:15:18.635 03:01:58 -- spdk/autotest.sh@317 -- # '[' 0 -eq 1 ']' 00:15:18.635 03:01:58 -- spdk/autotest.sh@326 -- # '[' 0 -eq 1 ']' 00:15:18.635 03:01:58 -- spdk/autotest.sh@331 -- # '[' 0 -eq 1 ']' 00:15:18.635 03:01:58 -- spdk/autotest.sh@335 -- # '[' 1 -eq 1 ']' 00:15:18.635 03:01:58 -- spdk/autotest.sh@336 -- # 
run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:18.635 03:01:58 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:18.635 03:01:58 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:18.635 03:01:58 -- common/autotest_common.sh@10 -- # set +x 00:15:18.635 ************************************ 00:15:18.635 START TEST ftl 00:15:18.635 ************************************ 00:15:18.635 03:01:58 ftl -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:18.635 * Looking for test storage... 00:15:18.635 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:18.635 03:01:58 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:18.635 03:01:58 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:18.635 03:01:58 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:18.635 03:01:58 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:18.635 03:01:58 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:15:18.635 03:01:58 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:18.635 03:01:58 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:18.635 03:01:58 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:18.635 03:01:58 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:18.635 03:01:58 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:18.635 03:01:58 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:18.636 03:01:58 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:18.636 03:01:58 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:18.636 03:01:58 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:18.636 03:01:58 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:18.636 03:01:58 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:18.636 03:01:58 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:18.636 03:01:58 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:18.636 03:01:58 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:18.636 03:01:58 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:18.636 03:01:58 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:18.636 03:01:58 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:18.636 03:01:58 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:18.636 03:01:58 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:18.636 03:01:58 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:18.636 03:01:58 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:18.636 03:01:58 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:18.636 03:01:58 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:18.636 03:01:58 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:18.636 03:01:58 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:18.636 03:01:58 ftl 
-- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:15:18.636 03:01:58 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:15:18.636 03:01:58 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:15:18.636 03:01:58 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:15:18.636 03:01:58 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:18.636 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:18.636 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:18.636 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:18.636 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:18.636 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:18.636 03:01:58 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=88765 00:15:18.636 03:01:58 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:15:18.636 03:01:58 ftl -- ftl/ftl.sh@38 -- # waitforlisten 88765 00:15:18.636 03:01:58 ftl -- common/autotest_common.sh@827 -- # '[' -z 88765 ']' 00:15:18.636 03:01:58 ftl -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:18.636 03:01:58 ftl -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:18.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:18.636 03:01:58 ftl -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:18.636 03:01:58 ftl -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:18.636 03:01:58 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:18.636 [2024-05-14 03:01:58.772160] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:15:18.636 [2024-05-14 03:01:58.772378] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88765 ] 00:15:18.636 [2024-05-14 03:01:58.910841] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
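The next lines show ftl.sh initializing the freshly started target and picking its cache and base NVMe devices by filtering bdev_get_bdevs output with jq. A condensed sketch of that selection step, using the same RPCs and jq filter that appear below; paths are shortened and process substitution stands in for the /dev/fd/62 seen in the log:

  rpc.py bdev_set_options -d                          # disable bdev auto-examine before init
  rpc.py framework_start_init
  rpc.py load_subsystem_config -j <(scripts/gen_nvme.sh)   # attach local NVMe controllers
  # cache disk: any non-zoned bdev with 64-byte metadata and >= 1310720 blocks
  rpc.py bdev_get_bdevs \
    | jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720)
               .driver_specific.nvme[].pci_address'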
00:15:18.636 [2024-05-14 03:01:58.934418] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.636 [2024-05-14 03:01:58.976368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.636 03:01:59 ftl -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:18.636 03:01:59 ftl -- common/autotest_common.sh@860 -- # return 0 00:15:18.636 03:01:59 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:15:18.636 03:01:59 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:18.636 03:02:00 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:15:18.636 03:02:00 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:18.636 03:02:00 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:15:18.636 03:02:00 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:18.636 03:02:00 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:18.636 03:02:01 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:15:18.636 03:02:01 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:15:18.636 03:02:01 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:15:18.636 03:02:01 ftl -- ftl/ftl.sh@50 -- # break 00:15:18.636 03:02:01 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:15:18.636 03:02:01 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:15:18.636 03:02:01 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:18.636 03:02:01 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:18.636 03:02:01 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:15:18.636 03:02:01 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:15:18.636 03:02:01 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:15:18.636 03:02:01 ftl -- ftl/ftl.sh@63 -- # break 00:15:18.636 03:02:01 ftl -- ftl/ftl.sh@66 -- # killprocess 88765 00:15:18.636 03:02:01 ftl -- common/autotest_common.sh@946 -- # '[' -z 88765 ']' 00:15:18.636 03:02:01 ftl -- common/autotest_common.sh@950 -- # kill -0 88765 00:15:18.636 03:02:01 ftl -- common/autotest_common.sh@951 -- # uname 00:15:18.636 03:02:01 ftl -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:18.636 03:02:01 ftl -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 88765 00:15:18.636 03:02:01 ftl -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:18.636 killing process with pid 88765 00:15:18.636 03:02:01 ftl -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:18.636 03:02:01 ftl -- common/autotest_common.sh@964 -- # echo 'killing process with pid 88765' 00:15:18.636 03:02:01 ftl -- common/autotest_common.sh@965 -- # kill 88765 00:15:18.636 03:02:01 ftl -- common/autotest_common.sh@970 -- # wait 88765 00:15:18.636 03:02:01 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:15:18.636 03:02:01 ftl -- ftl/ftl.sh@73 -- # [[ -z '' ]] 00:15:18.636 03:02:01 ftl -- ftl/ftl.sh@74 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:18.636 03:02:01 ftl -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:15:18.636 03:02:01 ftl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:18.636 03:02:01 ftl -- 
common/autotest_common.sh@10 -- # set +x 00:15:18.636 ************************************ 00:15:18.636 START TEST ftl_fio_basic 00:15:18.636 ************************************ 00:15:18.636 03:02:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:18.636 * Looking for test storage... 00:15:18.636 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:18.636 03:02:01 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:18.636 03:02:01 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:15:18.636 03:02:01 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:18.636 03:02:01 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:18.636 03:02:01 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:15:18.636 03:02:01 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:18.636 03:02:01 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:18.636 03:02:01 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:18.636 03:02:01 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:18.636 03:02:01 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:18.636 03:02:01 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:18.636 03:02:01 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:18.636 03:02:01 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:18.636 03:02:01 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:18.636 03:02:01 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:18.636 03:02:01 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:18.636 03:02:01 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:18.636 03:02:01 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:18.636 03:02:01 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:18.636 03:02:01 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:18.636 03:02:01 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:18.636 03:02:01 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:18.636 03:02:01 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:18.636 03:02:01 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:18.636 03:02:01 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:18.636 03:02:01 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:18.636 03:02:01 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:18.636 03:02:01 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:18.636 03:02:01 
ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:18.636 03:02:01 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:15:18.636 03:02:01 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:18.636 03:02:01 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:15:18.636 03:02:01 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:15:18.636 03:02:01 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:18.636 03:02:01 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:15:18.636 03:02:01 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:15:18.636 03:02:01 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:18.636 03:02:01 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:15:18.636 03:02:01 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:15:18.636 03:02:01 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:15:18.636 03:02:01 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:15:18.637 03:02:01 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:15:18.637 03:02:01 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:15:18.637 03:02:01 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:18.637 03:02:01 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:18.637 03:02:01 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:15:18.637 03:02:01 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=88873 00:15:18.637 03:02:01 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 88873 00:15:18.637 03:02:01 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:15:18.637 03:02:01 ftl.ftl_fio_basic -- common/autotest_common.sh@827 -- # '[' -z 88873 ']' 00:15:18.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:18.637 03:02:01 ftl.ftl_fio_basic -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:18.637 03:02:01 ftl.ftl_fio_basic -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:18.637 03:02:01 ftl.ftl_fio_basic -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:18.637 03:02:01 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:18.637 03:02:01 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:18.637 [2024-05-14 03:02:01.831395] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:15:18.637 [2024-05-14 03:02:01.831556] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88873 ] 00:15:18.637 [2024-05-14 03:02:01.973022] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
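The lines that follow show ftl_fio_basic carving its base device out of the NVMe namespace at 0000:00:11.0: attach the controller, size-check the namespace, then build a thin-provisioned lvol on top of it. A sketch using the same RPCs and values as this run; the lvstore UUID is simply whatever bdev_lvol_create_lvstore prints:

  rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
  rpc.py bdev_get_bdevs -b nvme0n1 | jq '.[] .num_blocks'    # capacity sanity check
  lvs=$(rpc.py bdev_lvol_create_lvstore nvme0n1 lvs)         # prints the new lvstore UUID
  rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs"      # thin-provisioned base bdev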
00:15:18.637 [2024-05-14 03:02:01.993832] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:18.637 [2024-05-14 03:02:02.031437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:18.637 [2024-05-14 03:02:02.031491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.637 [2024-05-14 03:02:02.031524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:18.637 03:02:02 ftl.ftl_fio_basic -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:18.637 03:02:02 ftl.ftl_fio_basic -- common/autotest_common.sh@860 -- # return 0 00:15:18.637 03:02:02 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:15:18.637 03:02:02 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:15:18.637 03:02:02 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:15:18.637 03:02:02 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:15:18.637 03:02:02 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:15:18.637 03:02:02 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:15:18.637 03:02:03 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:18.637 03:02:03 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:15:18.637 03:02:03 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:18.637 03:02:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1374 -- # local bdev_name=nvme0n1 00:15:18.637 03:02:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1375 -- # local bdev_info 00:15:18.637 03:02:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1376 -- # local bs 00:15:18.637 03:02:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1377 -- # local nb 00:15:18.637 03:02:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:18.637 03:02:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:15:18.637 { 00:15:18.637 "name": "nvme0n1", 00:15:18.637 "aliases": [ 00:15:18.637 "560e7498-c2cf-414e-bda4-f8d58581960e" 00:15:18.637 ], 00:15:18.637 "product_name": "NVMe disk", 00:15:18.637 "block_size": 4096, 00:15:18.637 "num_blocks": 1310720, 00:15:18.637 "uuid": "560e7498-c2cf-414e-bda4-f8d58581960e", 00:15:18.637 "assigned_rate_limits": { 00:15:18.637 "rw_ios_per_sec": 0, 00:15:18.637 "rw_mbytes_per_sec": 0, 00:15:18.637 "r_mbytes_per_sec": 0, 00:15:18.637 "w_mbytes_per_sec": 0 00:15:18.637 }, 00:15:18.637 "claimed": false, 00:15:18.637 "zoned": false, 00:15:18.637 "supported_io_types": { 00:15:18.637 "read": true, 00:15:18.637 "write": true, 00:15:18.637 "unmap": true, 00:15:18.637 "write_zeroes": true, 00:15:18.637 "flush": true, 00:15:18.637 "reset": true, 00:15:18.637 "compare": true, 00:15:18.637 "compare_and_write": false, 00:15:18.637 "abort": true, 00:15:18.637 "nvme_admin": true, 00:15:18.637 "nvme_io": true 00:15:18.637 }, 00:15:18.637 "driver_specific": { 00:15:18.637 "nvme": [ 00:15:18.637 { 00:15:18.637 "pci_address": "0000:00:11.0", 00:15:18.637 "trid": { 00:15:18.637 "trtype": "PCIe", 00:15:18.637 "traddr": "0000:00:11.0" 00:15:18.637 }, 00:15:18.637 "ctrlr_data": { 00:15:18.637 "cntlid": 0, 00:15:18.637 "vendor_id": "0x1b36", 00:15:18.637 "model_number": "QEMU NVMe Ctrl", 00:15:18.637 "serial_number": "12341", 00:15:18.637 "firmware_revision": "8.0.0", 00:15:18.637 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:18.637 
"oacs": { 00:15:18.637 "security": 0, 00:15:18.637 "format": 1, 00:15:18.637 "firmware": 0, 00:15:18.637 "ns_manage": 1 00:15:18.637 }, 00:15:18.637 "multi_ctrlr": false, 00:15:18.637 "ana_reporting": false 00:15:18.637 }, 00:15:18.637 "vs": { 00:15:18.637 "nvme_version": "1.4" 00:15:18.637 }, 00:15:18.637 "ns_data": { 00:15:18.637 "id": 1, 00:15:18.637 "can_share": false 00:15:18.637 } 00:15:18.637 } 00:15:18.637 ], 00:15:18.637 "mp_policy": "active_passive" 00:15:18.637 } 00:15:18.637 } 00:15:18.637 ]' 00:15:18.637 03:02:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:15:18.637 03:02:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # bs=4096 00:15:18.637 03:02:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:15:18.637 03:02:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # nb=1310720 00:15:18.637 03:02:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bdev_size=5120 00:15:18.637 03:02:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # echo 5120 00:15:18.637 03:02:03 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:15:18.637 03:02:03 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:18.637 03:02:03 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:15:18.637 03:02:03 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:18.637 03:02:03 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:18.637 03:02:03 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:15:18.637 03:02:03 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:18.637 03:02:03 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=e8865572-fe6b-434c-ab1f-6c07a7f92ec0 00:15:18.637 03:02:03 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u e8865572-fe6b-434c-ab1f-6c07a7f92ec0 00:15:18.637 03:02:04 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=a151d6d1-ef81-404d-aa05-c0061a2f433d 00:15:18.637 03:02:04 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 a151d6d1-ef81-404d-aa05-c0061a2f433d 00:15:18.637 03:02:04 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:15:18.637 03:02:04 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:15:18.637 03:02:04 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=a151d6d1-ef81-404d-aa05-c0061a2f433d 00:15:18.637 03:02:04 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:15:18.637 03:02:04 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size a151d6d1-ef81-404d-aa05-c0061a2f433d 00:15:18.637 03:02:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1374 -- # local bdev_name=a151d6d1-ef81-404d-aa05-c0061a2f433d 00:15:18.637 03:02:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1375 -- # local bdev_info 00:15:18.637 03:02:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1376 -- # local bs 00:15:18.637 03:02:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1377 -- # local nb 00:15:18.637 03:02:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a151d6d1-ef81-404d-aa05-c0061a2f433d 00:15:18.637 03:02:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:15:18.637 { 00:15:18.637 "name": "a151d6d1-ef81-404d-aa05-c0061a2f433d", 00:15:18.637 "aliases": 
[ 00:15:18.637 "lvs/nvme0n1p0" 00:15:18.637 ], 00:15:18.637 "product_name": "Logical Volume", 00:15:18.637 "block_size": 4096, 00:15:18.637 "num_blocks": 26476544, 00:15:18.637 "uuid": "a151d6d1-ef81-404d-aa05-c0061a2f433d", 00:15:18.637 "assigned_rate_limits": { 00:15:18.637 "rw_ios_per_sec": 0, 00:15:18.637 "rw_mbytes_per_sec": 0, 00:15:18.637 "r_mbytes_per_sec": 0, 00:15:18.637 "w_mbytes_per_sec": 0 00:15:18.637 }, 00:15:18.637 "claimed": false, 00:15:18.637 "zoned": false, 00:15:18.637 "supported_io_types": { 00:15:18.637 "read": true, 00:15:18.637 "write": true, 00:15:18.637 "unmap": true, 00:15:18.637 "write_zeroes": true, 00:15:18.637 "flush": false, 00:15:18.637 "reset": true, 00:15:18.637 "compare": false, 00:15:18.637 "compare_and_write": false, 00:15:18.637 "abort": false, 00:15:18.637 "nvme_admin": false, 00:15:18.637 "nvme_io": false 00:15:18.637 }, 00:15:18.637 "driver_specific": { 00:15:18.637 "lvol": { 00:15:18.637 "lvol_store_uuid": "e8865572-fe6b-434c-ab1f-6c07a7f92ec0", 00:15:18.637 "base_bdev": "nvme0n1", 00:15:18.637 "thin_provision": true, 00:15:18.637 "num_allocated_clusters": 0, 00:15:18.637 "snapshot": false, 00:15:18.637 "clone": false, 00:15:18.637 "esnap_clone": false 00:15:18.637 } 00:15:18.637 } 00:15:18.637 } 00:15:18.637 ]' 00:15:18.637 03:02:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:15:18.637 03:02:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # bs=4096 00:15:18.637 03:02:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:15:18.638 03:02:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # nb=26476544 00:15:18.638 03:02:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:15:18.638 03:02:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # echo 103424 00:15:18.638 03:02:04 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:15:18.638 03:02:04 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:15:18.638 03:02:04 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:15:18.895 03:02:04 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:18.895 03:02:04 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:15:18.895 03:02:04 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size a151d6d1-ef81-404d-aa05-c0061a2f433d 00:15:18.895 03:02:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1374 -- # local bdev_name=a151d6d1-ef81-404d-aa05-c0061a2f433d 00:15:18.895 03:02:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1375 -- # local bdev_info 00:15:18.895 03:02:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1376 -- # local bs 00:15:18.895 03:02:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1377 -- # local nb 00:15:18.895 03:02:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a151d6d1-ef81-404d-aa05-c0061a2f433d 00:15:19.153 03:02:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:15:19.153 { 00:15:19.153 "name": "a151d6d1-ef81-404d-aa05-c0061a2f433d", 00:15:19.153 "aliases": [ 00:15:19.153 "lvs/nvme0n1p0" 00:15:19.153 ], 00:15:19.153 "product_name": "Logical Volume", 00:15:19.153 "block_size": 4096, 00:15:19.153 "num_blocks": 26476544, 00:15:19.153 "uuid": "a151d6d1-ef81-404d-aa05-c0061a2f433d", 00:15:19.153 "assigned_rate_limits": { 00:15:19.153 "rw_ios_per_sec": 0, 00:15:19.153 "rw_mbytes_per_sec": 
0, 00:15:19.153 "r_mbytes_per_sec": 0, 00:15:19.153 "w_mbytes_per_sec": 0 00:15:19.153 }, 00:15:19.153 "claimed": false, 00:15:19.153 "zoned": false, 00:15:19.153 "supported_io_types": { 00:15:19.153 "read": true, 00:15:19.153 "write": true, 00:15:19.153 "unmap": true, 00:15:19.153 "write_zeroes": true, 00:15:19.153 "flush": false, 00:15:19.153 "reset": true, 00:15:19.153 "compare": false, 00:15:19.153 "compare_and_write": false, 00:15:19.153 "abort": false, 00:15:19.153 "nvme_admin": false, 00:15:19.153 "nvme_io": false 00:15:19.153 }, 00:15:19.153 "driver_specific": { 00:15:19.153 "lvol": { 00:15:19.153 "lvol_store_uuid": "e8865572-fe6b-434c-ab1f-6c07a7f92ec0", 00:15:19.153 "base_bdev": "nvme0n1", 00:15:19.153 "thin_provision": true, 00:15:19.153 "num_allocated_clusters": 0, 00:15:19.153 "snapshot": false, 00:15:19.153 "clone": false, 00:15:19.153 "esnap_clone": false 00:15:19.153 } 00:15:19.153 } 00:15:19.153 } 00:15:19.153 ]' 00:15:19.153 03:02:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:15:19.153 03:02:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # bs=4096 00:15:19.153 03:02:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:15:19.412 03:02:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # nb=26476544 00:15:19.412 03:02:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:15:19.412 03:02:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # echo 103424 00:15:19.412 03:02:05 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:15:19.412 03:02:05 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:19.671 03:02:05 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:15:19.671 03:02:05 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:15:19.671 03:02:05 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:15:19.671 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:15:19.671 03:02:05 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size a151d6d1-ef81-404d-aa05-c0061a2f433d 00:15:19.671 03:02:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1374 -- # local bdev_name=a151d6d1-ef81-404d-aa05-c0061a2f433d 00:15:19.671 03:02:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1375 -- # local bdev_info 00:15:19.671 03:02:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1376 -- # local bs 00:15:19.671 03:02:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1377 -- # local nb 00:15:19.671 03:02:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a151d6d1-ef81-404d-aa05-c0061a2f433d 00:15:19.930 03:02:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:15:19.930 { 00:15:19.930 "name": "a151d6d1-ef81-404d-aa05-c0061a2f433d", 00:15:19.930 "aliases": [ 00:15:19.930 "lvs/nvme0n1p0" 00:15:19.930 ], 00:15:19.931 "product_name": "Logical Volume", 00:15:19.931 "block_size": 4096, 00:15:19.931 "num_blocks": 26476544, 00:15:19.931 "uuid": "a151d6d1-ef81-404d-aa05-c0061a2f433d", 00:15:19.931 "assigned_rate_limits": { 00:15:19.931 "rw_ios_per_sec": 0, 00:15:19.931 "rw_mbytes_per_sec": 0, 00:15:19.931 "r_mbytes_per_sec": 0, 00:15:19.931 "w_mbytes_per_sec": 0 00:15:19.931 }, 00:15:19.931 "claimed": false, 00:15:19.931 "zoned": false, 00:15:19.931 "supported_io_types": { 00:15:19.931 "read": true, 00:15:19.931 "write": true, 00:15:19.931 "unmap": 
true, 00:15:19.931 "write_zeroes": true, 00:15:19.931 "flush": false, 00:15:19.931 "reset": true, 00:15:19.931 "compare": false, 00:15:19.931 "compare_and_write": false, 00:15:19.931 "abort": false, 00:15:19.931 "nvme_admin": false, 00:15:19.931 "nvme_io": false 00:15:19.931 }, 00:15:19.931 "driver_specific": { 00:15:19.931 "lvol": { 00:15:19.931 "lvol_store_uuid": "e8865572-fe6b-434c-ab1f-6c07a7f92ec0", 00:15:19.931 "base_bdev": "nvme0n1", 00:15:19.931 "thin_provision": true, 00:15:19.931 "num_allocated_clusters": 0, 00:15:19.931 "snapshot": false, 00:15:19.931 "clone": false, 00:15:19.931 "esnap_clone": false 00:15:19.931 } 00:15:19.931 } 00:15:19.931 } 00:15:19.931 ]' 00:15:19.931 03:02:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:15:19.931 03:02:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # bs=4096 00:15:19.931 03:02:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:15:19.931 03:02:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # nb=26476544 00:15:19.931 03:02:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:15:19.931 03:02:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # echo 103424 00:15:19.931 03:02:05 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:15:19.931 03:02:05 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:15:19.931 03:02:05 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d a151d6d1-ef81-404d-aa05-c0061a2f433d -c nvc0n1p0 --l2p_dram_limit 60 00:15:20.189 [2024-05-14 03:02:06.050062] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.189 [2024-05-14 03:02:06.050127] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:20.189 [2024-05-14 03:02:06.050199] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:15:20.189 [2024-05-14 03:02:06.050215] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.189 [2024-05-14 03:02:06.050314] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.189 [2024-05-14 03:02:06.050336] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:20.189 [2024-05-14 03:02:06.050352] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:15:20.189 [2024-05-14 03:02:06.050369] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.189 [2024-05-14 03:02:06.050424] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:20.189 [2024-05-14 03:02:06.050781] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:20.189 [2024-05-14 03:02:06.050817] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.189 [2024-05-14 03:02:06.050834] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:20.189 [2024-05-14 03:02:06.050847] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.417 ms 00:15:20.189 [2024-05-14 03:02:06.050860] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.189 [2024-05-14 03:02:06.050997] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 67a0af06-0c7b-4ecd-a5be-c10753b9a46d 00:15:20.189 [2024-05-14 03:02:06.052120] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.189 [2024-05-14 
03:02:06.052328] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:20.189 [2024-05-14 03:02:06.052537] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:15:20.189 [2024-05-14 03:02:06.052709] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.189 [2024-05-14 03:02:06.057421] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.189 [2024-05-14 03:02:06.057649] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:20.189 [2024-05-14 03:02:06.057781] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.492 ms 00:15:20.189 [2024-05-14 03:02:06.057836] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.189 [2024-05-14 03:02:06.058029] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.189 [2024-05-14 03:02:06.058091] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:20.189 [2024-05-14 03:02:06.058228] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:15:20.189 [2024-05-14 03:02:06.058361] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.189 [2024-05-14 03:02:06.058513] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.189 [2024-05-14 03:02:06.058568] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:20.189 [2024-05-14 03:02:06.058711] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:15:20.189 [2024-05-14 03:02:06.058761] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.189 [2024-05-14 03:02:06.058886] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:20.189 [2024-05-14 03:02:06.060489] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.189 [2024-05-14 03:02:06.060668] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:20.189 [2024-05-14 03:02:06.060695] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.618 ms 00:15:20.189 [2024-05-14 03:02:06.060727] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.189 [2024-05-14 03:02:06.060780] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.189 [2024-05-14 03:02:06.060798] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:20.189 [2024-05-14 03:02:06.060811] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:15:20.189 [2024-05-14 03:02:06.060827] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.189 [2024-05-14 03:02:06.060859] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:20.189 [2024-05-14 03:02:06.061038] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:15:20.189 [2024-05-14 03:02:06.061061] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:20.189 [2024-05-14 03:02:06.061079] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:15:20.189 [2024-05-14 03:02:06.061094] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:20.189 [2024-05-14 03:02:06.061110] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:20.189 [2024-05-14 03:02:06.061123] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:15:20.189 [2024-05-14 03:02:06.061156] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:20.189 [2024-05-14 03:02:06.061173] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:15:20.189 [2024-05-14 03:02:06.061186] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:15:20.189 [2024-05-14 03:02:06.061198] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.189 [2024-05-14 03:02:06.061212] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:20.189 [2024-05-14 03:02:06.061224] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.341 ms 00:15:20.189 [2024-05-14 03:02:06.061236] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.189 [2024-05-14 03:02:06.061319] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.189 [2024-05-14 03:02:06.061339] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:20.189 [2024-05-14 03:02:06.061351] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:15:20.189 [2024-05-14 03:02:06.061364] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.189 [2024-05-14 03:02:06.061499] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:20.189 [2024-05-14 03:02:06.061523] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:20.189 [2024-05-14 03:02:06.061536] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:20.189 [2024-05-14 03:02:06.061549] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:20.189 [2024-05-14 03:02:06.061561] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:20.189 [2024-05-14 03:02:06.061574] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:20.189 [2024-05-14 03:02:06.061585] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:20.189 [2024-05-14 03:02:06.061598] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:20.189 [2024-05-14 03:02:06.061608] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:20.189 [2024-05-14 03:02:06.061621] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:20.189 [2024-05-14 03:02:06.061632] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:20.189 [2024-05-14 03:02:06.061644] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:20.189 [2024-05-14 03:02:06.061655] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:20.189 [2024-05-14 03:02:06.061670] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:20.189 [2024-05-14 03:02:06.061680] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:15:20.189 [2024-05-14 03:02:06.061692] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:20.189 [2024-05-14 03:02:06.061703] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:20.189 [2024-05-14 03:02:06.061716] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:15:20.189 [2024-05-14 03:02:06.061726] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:15:20.189 [2024-05-14 03:02:06.061738] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:15:20.189 [2024-05-14 03:02:06.061752] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:15:20.189 [2024-05-14 03:02:06.061765] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:15:20.189 [2024-05-14 03:02:06.061776] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:20.189 [2024-05-14 03:02:06.061788] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:20.189 [2024-05-14 03:02:06.061799] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:20.189 [2024-05-14 03:02:06.061811] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:20.189 [2024-05-14 03:02:06.061822] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:15:20.189 [2024-05-14 03:02:06.061835] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:20.189 [2024-05-14 03:02:06.061846] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:20.189 [2024-05-14 03:02:06.061860] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:20.189 [2024-05-14 03:02:06.061888] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:20.189 [2024-05-14 03:02:06.061902] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:20.189 [2024-05-14 03:02:06.061912] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:15:20.189 [2024-05-14 03:02:06.061924] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:20.189 [2024-05-14 03:02:06.061934] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:20.189 [2024-05-14 03:02:06.061946] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:20.189 [2024-05-14 03:02:06.061957] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:20.189 [2024-05-14 03:02:06.061969] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:20.189 [2024-05-14 03:02:06.061980] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:15:20.189 [2024-05-14 03:02:06.061992] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:20.189 [2024-05-14 03:02:06.062002] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:20.189 [2024-05-14 03:02:06.062016] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:20.189 [2024-05-14 03:02:06.062028] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:20.189 [2024-05-14 03:02:06.062041] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:20.189 [2024-05-14 03:02:06.062052] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:20.189 [2024-05-14 03:02:06.062067] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:20.189 [2024-05-14 03:02:06.062078] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:20.189 [2024-05-14 03:02:06.062090] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:20.189 [2024-05-14 03:02:06.062100] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:20.189 [2024-05-14 03:02:06.062116] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:20.190 [2024-05-14 03:02:06.062128] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:20.190 [2024-05-14 03:02:06.062161] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:20.190 [2024-05-14 03:02:06.062180] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:20.190 [2024-05-14 03:02:06.062194] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:15:20.190 [2024-05-14 03:02:06.062206] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:15:20.190 [2024-05-14 03:02:06.062219] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:15:20.190 [2024-05-14 03:02:06.062231] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:15:20.190 [2024-05-14 03:02:06.062244] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:15:20.190 [2024-05-14 03:02:06.062256] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:15:20.190 [2024-05-14 03:02:06.062269] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:15:20.190 [2024-05-14 03:02:06.062280] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:15:20.190 [2024-05-14 03:02:06.062295] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:15:20.190 [2024-05-14 03:02:06.062307] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:15:20.190 [2024-05-14 03:02:06.062320] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:15:20.190 [2024-05-14 03:02:06.062332] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:15:20.190 [2024-05-14 03:02:06.062345] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:20.190 [2024-05-14 03:02:06.062357] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:20.190 [2024-05-14 03:02:06.062372] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:20.190 [2024-05-14 03:02:06.062384] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:20.190 [2024-05-14 03:02:06.062398] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:20.190 [2024-05-14 03:02:06.062410] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:15:20.190 [2024-05-14 03:02:06.062424] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.190 [2024-05-14 03:02:06.062436] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:20.190 [2024-05-14 03:02:06.062450] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.969 ms 00:15:20.190 [2024-05-14 03:02:06.062461] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.190 [2024-05-14 03:02:06.068645] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.190 [2024-05-14 03:02:06.068709] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:20.190 [2024-05-14 03:02:06.068732] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.082 ms 00:15:20.190 [2024-05-14 03:02:06.068760] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.190 [2024-05-14 03:02:06.068868] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.190 [2024-05-14 03:02:06.068884] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:20.190 [2024-05-14 03:02:06.068899] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:15:20.190 [2024-05-14 03:02:06.068909] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.190 [2024-05-14 03:02:06.078064] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.190 [2024-05-14 03:02:06.078117] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:20.190 [2024-05-14 03:02:06.078211] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.074 ms 00:15:20.190 [2024-05-14 03:02:06.078226] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.190 [2024-05-14 03:02:06.078318] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.190 [2024-05-14 03:02:06.078350] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:20.190 [2024-05-14 03:02:06.078367] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:15:20.190 [2024-05-14 03:02:06.078378] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.190 [2024-05-14 03:02:06.078747] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.190 [2024-05-14 03:02:06.078781] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:20.190 [2024-05-14 03:02:06.078802] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:15:20.190 [2024-05-14 03:02:06.078817] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.190 [2024-05-14 03:02:06.078978] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.190 [2024-05-14 03:02:06.078996] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:20.190 [2024-05-14 03:02:06.079011] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:15:20.190 [2024-05-14 03:02:06.079023] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.190 [2024-05-14 03:02:06.093872] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.190 [2024-05-14 03:02:06.093932] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:20.190 [2024-05-14 03:02:06.093963] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.785 ms 00:15:20.190 [2024-05-14 
03:02:06.093980] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.190 [2024-05-14 03:02:06.105453] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:20.190 [2024-05-14 03:02:06.119030] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.190 [2024-05-14 03:02:06.119133] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:20.190 [2024-05-14 03:02:06.119183] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.866 ms 00:15:20.190 [2024-05-14 03:02:06.119200] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.190 [2024-05-14 03:02:06.168351] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.190 [2024-05-14 03:02:06.168455] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:20.190 [2024-05-14 03:02:06.168491] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.056 ms 00:15:20.190 [2024-05-14 03:02:06.168508] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.190 [2024-05-14 03:02:06.168601] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:15:20.190 [2024-05-14 03:02:06.168625] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:15:23.510 [2024-05-14 03:02:08.804524] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.510 [2024-05-14 03:02:08.804602] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:23.510 [2024-05-14 03:02:08.804658] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2635.930 ms 00:15:23.510 [2024-05-14 03:02:08.804691] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.510 [2024-05-14 03:02:08.804927] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.510 [2024-05-14 03:02:08.804950] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:23.510 [2024-05-14 03:02:08.804963] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.172 ms 00:15:23.510 [2024-05-14 03:02:08.804976] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.510 [2024-05-14 03:02:08.808775] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.510 [2024-05-14 03:02:08.808835] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:23.510 [2024-05-14 03:02:08.808870] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.743 ms 00:15:23.510 [2024-05-14 03:02:08.808901] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.510 [2024-05-14 03:02:08.812036] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.510 [2024-05-14 03:02:08.812083] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:23.510 [2024-05-14 03:02:08.812102] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.065 ms 00:15:23.510 [2024-05-14 03:02:08.812116] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.510 [2024-05-14 03:02:08.812354] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.510 [2024-05-14 03:02:08.812379] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:23.510 [2024-05-14 03:02:08.812393] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.175 ms 00:15:23.510 [2024-05-14 03:02:08.812407] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.510 [2024-05-14 03:02:08.838517] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.510 [2024-05-14 03:02:08.838588] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:23.510 [2024-05-14 03:02:08.838626] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.035 ms 00:15:23.510 [2024-05-14 03:02:08.838640] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.510 [2024-05-14 03:02:08.843040] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.510 [2024-05-14 03:02:08.843086] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:23.510 [2024-05-14 03:02:08.843121] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.339 ms 00:15:23.510 [2024-05-14 03:02:08.843137] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.510 [2024-05-14 03:02:08.847039] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.510 [2024-05-14 03:02:08.847083] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:15:23.511 [2024-05-14 03:02:08.847115] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.778 ms 00:15:23.511 [2024-05-14 03:02:08.847128] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.511 [2024-05-14 03:02:08.850915] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.511 [2024-05-14 03:02:08.850962] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:23.511 [2024-05-14 03:02:08.850997] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.694 ms 00:15:23.511 [2024-05-14 03:02:08.851010] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.511 [2024-05-14 03:02:08.851067] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.511 [2024-05-14 03:02:08.851105] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:23.511 [2024-05-14 03:02:08.851117] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:15:23.511 [2024-05-14 03:02:08.851183] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.511 [2024-05-14 03:02:08.851285] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.511 [2024-05-14 03:02:08.851309] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:23.511 [2024-05-14 03:02:08.851322] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:15:23.511 [2024-05-14 03:02:08.851338] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.511 [2024-05-14 03:02:08.852568] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2801.952 ms, result 0 00:15:23.511 { 00:15:23.511 "name": "ftl0", 00:15:23.511 "uuid": "67a0af06-0c7b-4ecd-a5be-c10753b9a46d" 00:15:23.511 } 00:15:23.511 03:02:08 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:15:23.511 03:02:08 ftl.ftl_fio_basic -- common/autotest_common.sh@895 -- # local bdev_name=ftl0 00:15:23.511 03:02:08 ftl.ftl_fio_basic -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:23.511 03:02:08 ftl.ftl_fio_basic -- common/autotest_common.sh@897 -- # local i 00:15:23.511 03:02:08 
ftl.ftl_fio_basic -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:23.511 03:02:08 ftl.ftl_fio_basic -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:23.511 03:02:08 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:23.511 03:02:09 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:15:23.511 [ 00:15:23.511 { 00:15:23.511 "name": "ftl0", 00:15:23.511 "aliases": [ 00:15:23.511 "67a0af06-0c7b-4ecd-a5be-c10753b9a46d" 00:15:23.511 ], 00:15:23.511 "product_name": "FTL disk", 00:15:23.511 "block_size": 4096, 00:15:23.511 "num_blocks": 20971520, 00:15:23.511 "uuid": "67a0af06-0c7b-4ecd-a5be-c10753b9a46d", 00:15:23.511 "assigned_rate_limits": { 00:15:23.511 "rw_ios_per_sec": 0, 00:15:23.511 "rw_mbytes_per_sec": 0, 00:15:23.511 "r_mbytes_per_sec": 0, 00:15:23.511 "w_mbytes_per_sec": 0 00:15:23.511 }, 00:15:23.511 "claimed": false, 00:15:23.511 "zoned": false, 00:15:23.511 "supported_io_types": { 00:15:23.511 "read": true, 00:15:23.511 "write": true, 00:15:23.511 "unmap": true, 00:15:23.511 "write_zeroes": true, 00:15:23.511 "flush": true, 00:15:23.511 "reset": false, 00:15:23.511 "compare": false, 00:15:23.511 "compare_and_write": false, 00:15:23.511 "abort": false, 00:15:23.511 "nvme_admin": false, 00:15:23.511 "nvme_io": false 00:15:23.511 }, 00:15:23.511 "driver_specific": { 00:15:23.511 "ftl": { 00:15:23.511 "base_bdev": "a151d6d1-ef81-404d-aa05-c0061a2f433d", 00:15:23.511 "cache": "nvc0n1p0" 00:15:23.511 } 00:15:23.511 } 00:15:23.511 } 00:15:23.511 ] 00:15:23.511 03:02:09 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # return 0 00:15:23.511 03:02:09 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:15:23.511 03:02:09 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:15:23.769 03:02:09 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:15:23.769 03:02:09 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:15:23.769 [2024-05-14 03:02:09.790307] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.769 [2024-05-14 03:02:09.790370] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:23.769 [2024-05-14 03:02:09.790399] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:23.769 [2024-05-14 03:02:09.790413] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.769 [2024-05-14 03:02:09.790474] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:23.769 [2024-05-14 03:02:09.790901] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.769 [2024-05-14 03:02:09.790922] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:23.769 [2024-05-14 03:02:09.790938] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.402 ms 00:15:23.769 [2024-05-14 03:02:09.790951] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.769 [2024-05-14 03:02:09.791449] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.769 [2024-05-14 03:02:09.791482] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:23.769 [2024-05-14 03:02:09.791497] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.462 
ms 00:15:23.769 [2024-05-14 03:02:09.791527] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.769 [2024-05-14 03:02:09.794864] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.769 [2024-05-14 03:02:09.794904] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:23.769 [2024-05-14 03:02:09.794921] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.306 ms 00:15:23.769 [2024-05-14 03:02:09.794937] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.029 [2024-05-14 03:02:09.801982] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.029 [2024-05-14 03:02:09.802018] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:15:24.029 [2024-05-14 03:02:09.802050] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.997 ms 00:15:24.029 [2024-05-14 03:02:09.802063] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.029 [2024-05-14 03:02:09.803446] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.029 [2024-05-14 03:02:09.803498] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:24.029 [2024-05-14 03:02:09.803515] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.239 ms 00:15:24.029 [2024-05-14 03:02:09.803528] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.029 [2024-05-14 03:02:09.807408] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.029 [2024-05-14 03:02:09.807459] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:24.029 [2024-05-14 03:02:09.807480] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.830 ms 00:15:24.029 [2024-05-14 03:02:09.807494] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.029 [2024-05-14 03:02:09.807685] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.029 [2024-05-14 03:02:09.807731] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:24.029 [2024-05-14 03:02:09.807746] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:15:24.029 [2024-05-14 03:02:09.807759] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.029 [2024-05-14 03:02:09.809396] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.029 [2024-05-14 03:02:09.809440] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:15:24.029 [2024-05-14 03:02:09.809456] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.604 ms 00:15:24.029 [2024-05-14 03:02:09.809469] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.029 [2024-05-14 03:02:09.810795] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.029 [2024-05-14 03:02:09.810854] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:15:24.029 [2024-05-14 03:02:09.810888] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.276 ms 00:15:24.029 [2024-05-14 03:02:09.810902] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.029 [2024-05-14 03:02:09.812029] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.029 [2024-05-14 03:02:09.812102] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:24.029 [2024-05-14 03:02:09.812132] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.077 ms 00:15:24.029 [2024-05-14 03:02:09.812159] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.029 [2024-05-14 03:02:09.813391] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.029 [2024-05-14 03:02:09.813437] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:24.029 [2024-05-14 03:02:09.813452] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.100 ms 00:15:24.029 [2024-05-14 03:02:09.813465] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.029 [2024-05-14 03:02:09.813517] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:24.029 [2024-05-14 03:02:09.813544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:24.029 [2024-05-14 03:02:09.813559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:24.029 [2024-05-14 03:02:09.813573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:24.029 [2024-05-14 03:02:09.813585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:24.029 [2024-05-14 03:02:09.813599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:24.029 [2024-05-14 03:02:09.813611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:24.029 [2024-05-14 03:02:09.813625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:24.029 [2024-05-14 03:02:09.813637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:24.029 [2024-05-14 03:02:09.813651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:24.029 [2024-05-14 03:02:09.813663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:24.029 [2024-05-14 03:02:09.813687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:24.029 [2024-05-14 03:02:09.813699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:24.029 [2024-05-14 03:02:09.813716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:24.029 [2024-05-14 03:02:09.813728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:24.029 [2024-05-14 03:02:09.813743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:24.029 [2024-05-14 03:02:09.813756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:24.029 [2024-05-14 03:02:09.813770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.813781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.813795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.813807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.813820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.813832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.813846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.813858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.813872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.813884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.813898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.813910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.813926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.813937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.813951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.813963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.813977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.813989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.814002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.814014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.814028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.814044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.814060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.814072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.814085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.814097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.814113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.814125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.814154] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.814167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.814181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.814193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.814206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.814218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.814231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.814243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.814265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.814277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.814307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.814319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.814332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.814344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.814357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.814369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.814384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.814396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.814409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.814421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.814434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.814445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.814458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.814470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.814485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 
03:02:09.814499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.814514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.814526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.814539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.814551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.814564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.814575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.814591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.814602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.814616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.814627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.814640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.814652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.814665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.814676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.814690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.814701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.814714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.814726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.814739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.814751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.814764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.814776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.814792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.814804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 
00:15:24.030 [2024-05-14 03:02:09.814818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.814840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.814854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.814866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.814880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.814891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:24.030 [2024-05-14 03:02:09.814914] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:24.030 [2024-05-14 03:02:09.814930] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 67a0af06-0c7b-4ecd-a5be-c10753b9a46d 00:15:24.030 [2024-05-14 03:02:09.814945] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:24.030 [2024-05-14 03:02:09.814956] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:24.030 [2024-05-14 03:02:09.814969] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:24.030 [2024-05-14 03:02:09.814980] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:24.030 [2024-05-14 03:02:09.814994] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:24.030 [2024-05-14 03:02:09.815005] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:24.030 [2024-05-14 03:02:09.815040] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:24.030 [2024-05-14 03:02:09.815052] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:24.031 [2024-05-14 03:02:09.815064] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:24.031 [2024-05-14 03:02:09.815075] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.031 [2024-05-14 03:02:09.815088] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:24.031 [2024-05-14 03:02:09.815114] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.567 ms 00:15:24.031 [2024-05-14 03:02:09.815128] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.031 [2024-05-14 03:02:09.816686] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.031 [2024-05-14 03:02:09.816720] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:24.031 [2024-05-14 03:02:09.816750] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.504 ms 00:15:24.031 [2024-05-14 03:02:09.816762] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.031 [2024-05-14 03:02:09.816843] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.031 [2024-05-14 03:02:09.816863] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:24.031 [2024-05-14 03:02:09.816876] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:15:24.031 [2024-05-14 03:02:09.816892] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.031 [2024-05-14 03:02:09.822467] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:15:24.031 [2024-05-14 03:02:09.822521] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:24.031 [2024-05-14 03:02:09.822539] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:24.031 [2024-05-14 03:02:09.822553] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.031 [2024-05-14 03:02:09.822623] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:24.031 [2024-05-14 03:02:09.822642] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:24.031 [2024-05-14 03:02:09.822669] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:24.031 [2024-05-14 03:02:09.822684] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.031 [2024-05-14 03:02:09.822792] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:24.031 [2024-05-14 03:02:09.822817] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:24.031 [2024-05-14 03:02:09.822830] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:24.031 [2024-05-14 03:02:09.822846] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.031 [2024-05-14 03:02:09.822878] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:24.031 [2024-05-14 03:02:09.822905] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:24.031 [2024-05-14 03:02:09.822917] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:24.031 [2024-05-14 03:02:09.822930] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.031 [2024-05-14 03:02:09.832443] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:24.031 [2024-05-14 03:02:09.832554] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:24.031 [2024-05-14 03:02:09.832574] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:24.031 [2024-05-14 03:02:09.832607] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.031 [2024-05-14 03:02:09.836381] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:24.031 [2024-05-14 03:02:09.836424] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:24.031 [2024-05-14 03:02:09.836472] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:24.031 [2024-05-14 03:02:09.836487] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.031 [2024-05-14 03:02:09.836585] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:24.031 [2024-05-14 03:02:09.836607] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:24.031 [2024-05-14 03:02:09.836620] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:24.031 [2024-05-14 03:02:09.836633] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.031 [2024-05-14 03:02:09.836712] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:24.031 [2024-05-14 03:02:09.836731] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:24.031 [2024-05-14 03:02:09.836744] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:24.031 [2024-05-14 03:02:09.836756] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:15:24.031 [2024-05-14 03:02:09.836863] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:24.031 [2024-05-14 03:02:09.836887] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:24.031 [2024-05-14 03:02:09.836899] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:24.031 [2024-05-14 03:02:09.836912] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.031 [2024-05-14 03:02:09.836978] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:24.031 [2024-05-14 03:02:09.837000] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:24.031 [2024-05-14 03:02:09.837012] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:24.031 [2024-05-14 03:02:09.837025] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.031 [2024-05-14 03:02:09.837078] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:24.031 [2024-05-14 03:02:09.837100] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:24.031 [2024-05-14 03:02:09.837127] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:24.031 [2024-05-14 03:02:09.837179] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.031 [2024-05-14 03:02:09.837252] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:24.031 [2024-05-14 03:02:09.837291] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:24.031 [2024-05-14 03:02:09.837304] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:24.031 [2024-05-14 03:02:09.837317] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.031 [2024-05-14 03:02:09.837537] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 47.185 ms, result 0 00:15:24.031 true 00:15:24.031 03:02:09 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 88873 00:15:24.031 03:02:09 ftl.ftl_fio_basic -- common/autotest_common.sh@946 -- # '[' -z 88873 ']' 00:15:24.031 03:02:09 ftl.ftl_fio_basic -- common/autotest_common.sh@950 -- # kill -0 88873 00:15:24.031 03:02:09 ftl.ftl_fio_basic -- common/autotest_common.sh@951 -- # uname 00:15:24.031 03:02:09 ftl.ftl_fio_basic -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:24.031 03:02:09 ftl.ftl_fio_basic -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 88873 00:15:24.031 killing process with pid 88873 00:15:24.031 03:02:09 ftl.ftl_fio_basic -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:24.031 03:02:09 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:24.031 03:02:09 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # echo 'killing process with pid 88873' 00:15:24.031 03:02:09 ftl.ftl_fio_basic -- common/autotest_common.sh@965 -- # kill 88873 00:15:24.031 03:02:09 ftl.ftl_fio_basic -- common/autotest_common.sh@970 -- # wait 88873 00:15:27.314 03:02:12 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:15:27.314 03:02:12 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:27.314 03:02:12 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:15:27.314 03:02:12 ftl.ftl_fio_basic -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:27.314 03:02:12 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:27.314 
03:02:12 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:27.314 03:02:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:27.314 03:02:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:15:27.314 03:02:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:27.314 03:02:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1335 -- # local sanitizers 00:15:27.314 03:02:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:27.314 03:02:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # shift 00:15:27.314 03:02:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local asan_lib= 00:15:27.314 03:02:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:15:27.314 03:02:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # grep libasan 00:15:27.314 03:02:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:27.314 03:02:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:15:27.314 03:02:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:27.314 03:02:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:27.314 03:02:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # break 00:15:27.314 03:02:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:27.314 03:02:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:27.314 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:15:27.314 fio-3.35 00:15:27.314 Starting 1 thread 00:15:32.593 00:15:32.593 test: (groupid=0, jobs=1): err= 0: pid=89047: Tue May 14 03:02:17 2024 00:15:32.593 read: IOPS=931, BW=61.9MiB/s (64.9MB/s)(255MiB/4113msec) 00:15:32.593 slat (usec): min=5, max=147, avg= 7.66, stdev= 4.19 00:15:32.593 clat (usec): min=321, max=870, avg=475.16, stdev=45.37 00:15:32.593 lat (usec): min=328, max=885, avg=482.82, stdev=46.18 00:15:32.593 clat percentiles (usec): 00:15:32.593 | 1.00th=[ 379], 5.00th=[ 420], 10.00th=[ 433], 20.00th=[ 441], 00:15:32.593 | 30.00th=[ 449], 40.00th=[ 457], 50.00th=[ 465], 60.00th=[ 478], 00:15:32.593 | 70.00th=[ 490], 80.00th=[ 506], 90.00th=[ 537], 95.00th=[ 562], 00:15:32.593 | 99.00th=[ 611], 99.50th=[ 627], 99.90th=[ 742], 99.95th=[ 758], 00:15:32.593 | 99.99th=[ 873] 00:15:32.593 write: IOPS=938, BW=62.3MiB/s (65.4MB/s)(256MiB/4108msec); 0 zone resets 00:15:32.593 slat (usec): min=19, max=275, avg=24.89, stdev= 7.37 00:15:32.593 clat (usec): min=373, max=1143, avg=547.19, stdev=61.43 00:15:32.593 lat (usec): min=413, max=1170, avg=572.07, stdev=62.25 00:15:32.593 clat percentiles (usec): 00:15:32.593 | 1.00th=[ 445], 5.00th=[ 461], 10.00th=[ 478], 20.00th=[ 506], 00:15:32.594 | 30.00th=[ 523], 40.00th=[ 537], 50.00th=[ 545], 60.00th=[ 553], 00:15:32.594 | 70.00th=[ 562], 80.00th=[ 578], 90.00th=[ 611], 95.00th=[ 635], 
00:15:32.594 | 99.00th=[ 816], 99.50th=[ 865], 99.90th=[ 930], 99.95th=[ 963], 00:15:32.594 | 99.99th=[ 1139] 00:15:32.594 bw ( KiB/s): min=61336, max=65280, per=99.90%, avg=63767.00, stdev=1325.44, samples=8 00:15:32.594 iops : min= 902, max= 960, avg=937.75, stdev=19.49, samples=8 00:15:32.594 lat (usec) : 500=47.65%, 750=51.53%, 1000=0.81% 00:15:32.594 lat (msec) : 2=0.01% 00:15:32.594 cpu : usr=98.44%, sys=0.41%, ctx=7, majf=0, minf=1181 00:15:32.594 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:32.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:32.594 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:32.594 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:32.594 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:32.594 00:15:32.594 Run status group 0 (all jobs): 00:15:32.594 READ: bw=61.9MiB/s (64.9MB/s), 61.9MiB/s-61.9MiB/s (64.9MB/s-64.9MB/s), io=255MiB (267MB), run=4113-4113msec 00:15:32.594 WRITE: bw=62.3MiB/s (65.4MB/s), 62.3MiB/s-62.3MiB/s (65.4MB/s-65.4MB/s), io=256MiB (269MB), run=4108-4108msec 00:15:32.594 ----------------------------------------------------- 00:15:32.594 Suppressions used: 00:15:32.594 count bytes template 00:15:32.594 1 5 /usr/src/fio/parse.c 00:15:32.594 1 8 libtcmalloc_minimal.so 00:15:32.594 1 904 libcrypto.so 00:15:32.594 ----------------------------------------------------- 00:15:32.594 00:15:32.594 03:02:18 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:15:32.594 03:02:18 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:32.594 03:02:18 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:32.594 03:02:18 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:32.594 03:02:18 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:15:32.594 03:02:18 ftl.ftl_fio_basic -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:32.594 03:02:18 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:32.594 03:02:18 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:32.594 03:02:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:32.594 03:02:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:15:32.594 03:02:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:32.594 03:02:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1335 -- # local sanitizers 00:15:32.594 03:02:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:32.594 03:02:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # shift 00:15:32.594 03:02:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local asan_lib= 00:15:32.594 03:02:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:15:32.594 03:02:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:32.594 03:02:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # grep libasan 00:15:32.594 03:02:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:15:32.594 03:02:18 
ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:32.594 03:02:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:32.594 03:02:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # break 00:15:32.594 03:02:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:32.594 03:02:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:32.853 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:32.853 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:32.853 fio-3.35 00:15:32.853 Starting 2 threads 00:16:04.941 00:16:04.941 first_half: (groupid=0, jobs=1): err= 0: pid=89139: Tue May 14 03:02:47 2024 00:16:04.941 read: IOPS=2305, BW=9223KiB/s (9445kB/s)(255MiB/28292msec) 00:16:04.941 slat (usec): min=4, max=511, avg= 7.61, stdev= 4.35 00:16:04.941 clat (usec): min=913, max=318173, avg=43159.84, stdev=22031.99 00:16:04.941 lat (usec): min=921, max=318178, avg=43167.45, stdev=22032.15 00:16:04.941 clat percentiles (msec): 00:16:04.941 | 1.00th=[ 7], 5.00th=[ 37], 10.00th=[ 38], 20.00th=[ 38], 00:16:04.941 | 30.00th=[ 39], 40.00th=[ 39], 50.00th=[ 39], 60.00th=[ 40], 00:16:04.941 | 70.00th=[ 41], 80.00th=[ 43], 90.00th=[ 47], 95.00th=[ 58], 00:16:04.941 | 99.00th=[ 171], 99.50th=[ 194], 99.90th=[ 230], 99.95th=[ 251], 00:16:04.941 | 99.99th=[ 309] 00:16:04.941 write: IOPS=3296, BW=12.9MiB/s (13.5MB/s)(256MiB/19879msec); 0 zone resets 00:16:04.941 slat (usec): min=5, max=804, avg= 9.45, stdev= 7.58 00:16:04.941 clat (usec): min=461, max=118231, avg=12265.15, stdev=22485.41 00:16:04.941 lat (usec): min=474, max=118252, avg=12274.60, stdev=22485.49 00:16:04.941 clat percentiles (usec): 00:16:04.941 | 1.00th=[ 1004], 5.00th=[ 1319], 10.00th=[ 1516], 20.00th=[ 1795], 00:16:04.941 | 30.00th=[ 2114], 40.00th=[ 2868], 50.00th=[ 4359], 60.00th=[ 5997], 00:16:04.941 | 70.00th=[ 7701], 80.00th=[ 12780], 90.00th=[ 17695], 95.00th=[ 84411], 00:16:04.941 | 99.00th=[ 96994], 99.50th=[103285], 99.90th=[113771], 99.95th=[115868], 00:16:04.941 | 99.99th=[117965] 00:16:04.941 bw ( KiB/s): min= 6752, max=52616, per=100.00%, avg=24966.10, stdev=12477.27, samples=21 00:16:04.941 iops : min= 1688, max=13154, avg=6241.52, stdev=3119.32, samples=21 00:16:04.941 lat (usec) : 500=0.01%, 750=0.07%, 1000=0.43% 00:16:04.941 lat (msec) : 2=13.05%, 4=10.60%, 10=13.80%, 20=8.21%, 50=46.17% 00:16:04.941 lat (msec) : 100=5.85%, 250=1.78%, 500=0.03% 00:16:04.941 cpu : usr=98.27%, sys=0.58%, ctx=114, majf=0, minf=5597 00:16:04.941 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:04.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:04.941 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:04.941 issued rwts: total=65237,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:04.941 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:04.941 second_half: (groupid=0, jobs=1): err= 0: pid=89140: Tue May 14 03:02:47 2024 00:16:04.941 read: IOPS=2290, BW=9163KiB/s (9383kB/s)(255MiB/28512msec) 00:16:04.941 slat (nsec): min=4462, max=40338, avg=7331.43, stdev=2050.55 00:16:04.941 clat (usec): min=1156, max=324780, 
avg=41797.75, stdev=20278.43 00:16:04.941 lat (usec): min=1161, max=324788, avg=41805.08, stdev=20278.63 00:16:04.941 clat percentiles (msec): 00:16:04.941 | 1.00th=[ 10], 5.00th=[ 29], 10.00th=[ 38], 20.00th=[ 38], 00:16:04.941 | 30.00th=[ 39], 40.00th=[ 39], 50.00th=[ 39], 60.00th=[ 40], 00:16:04.941 | 70.00th=[ 41], 80.00th=[ 42], 90.00th=[ 46], 95.00th=[ 56], 00:16:04.941 | 99.00th=[ 155], 99.50th=[ 184], 99.90th=[ 226], 99.95th=[ 259], 00:16:04.941 | 99.99th=[ 317] 00:16:04.941 write: IOPS=2517, BW=9.83MiB/s (10.3MB/s)(256MiB/26034msec); 0 zone resets 00:16:04.941 slat (usec): min=5, max=482, avg= 9.02, stdev= 5.30 00:16:04.941 clat (usec): min=454, max=120557, avg=14003.69, stdev=23011.07 00:16:04.941 lat (usec): min=469, max=120568, avg=14012.71, stdev=23011.29 00:16:04.941 clat percentiles (usec): 00:16:04.941 | 1.00th=[ 898], 5.00th=[ 1221], 10.00th=[ 1500], 20.00th=[ 1991], 00:16:04.941 | 30.00th=[ 3785], 40.00th=[ 5800], 50.00th=[ 6915], 60.00th=[ 7701], 00:16:04.941 | 70.00th=[ 8979], 80.00th=[ 13173], 90.00th=[ 38011], 95.00th=[ 86508], 00:16:04.941 | 99.00th=[ 99091], 99.50th=[106431], 99.90th=[115868], 99.95th=[116917], 00:16:04.941 | 99.99th=[119014] 00:16:04.941 bw ( KiB/s): min= 16, max=49304, per=89.78%, avg=18081.21, stdev=12818.14, samples=29 00:16:04.941 iops : min= 4, max=12326, avg=4520.28, stdev=3204.50, samples=29 00:16:04.941 lat (usec) : 500=0.01%, 750=0.08%, 1000=0.95% 00:16:04.941 lat (msec) : 2=9.09%, 4=5.57%, 10=21.22%, 20=9.46%, 50=46.38% 00:16:04.941 lat (msec) : 100=5.63%, 250=1.59%, 500=0.03% 00:16:04.941 cpu : usr=99.03%, sys=0.29%, ctx=47, majf=0, minf=5535 00:16:04.941 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:04.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:04.941 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:04.941 issued rwts: total=65317,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:04.941 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:04.941 00:16:04.941 Run status group 0 (all jobs): 00:16:04.941 READ: bw=17.9MiB/s (18.8MB/s), 9163KiB/s-9223KiB/s (9383kB/s-9445kB/s), io=510MiB (535MB), run=28292-28512msec 00:16:04.941 WRITE: bw=19.7MiB/s (20.6MB/s), 9.83MiB/s-12.9MiB/s (10.3MB/s-13.5MB/s), io=512MiB (537MB), run=19879-26034msec 00:16:04.941 ----------------------------------------------------- 00:16:04.941 Suppressions used: 00:16:04.941 count bytes template 00:16:04.941 2 10 /usr/src/fio/parse.c 00:16:04.941 4 384 /usr/src/fio/iolog.c 00:16:04.941 1 8 libtcmalloc_minimal.so 00:16:04.941 1 904 libcrypto.so 00:16:04.941 ----------------------------------------------------- 00:16:04.941 00:16:04.941 03:02:48 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:16:04.941 03:02:48 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:04.941 03:02:48 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:04.941 03:02:48 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:04.941 03:02:48 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:16:04.941 03:02:48 ftl.ftl_fio_basic -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:04.941 03:02:48 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:04.941 03:02:48 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:04.941 03:02:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # 
fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:04.941 03:02:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:16:04.941 03:02:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:04.941 03:02:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1335 -- # local sanitizers 00:16:04.941 03:02:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:04.941 03:02:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # shift 00:16:04.941 03:02:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local asan_lib= 00:16:04.941 03:02:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:16:04.941 03:02:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:04.942 03:02:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # grep libasan 00:16:04.942 03:02:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:16:04.942 03:02:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:04.942 03:02:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:04.942 03:02:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # break 00:16:04.942 03:02:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:04.942 03:02:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:04.942 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:04.942 fio-3.35 00:16:04.942 Starting 1 thread 00:16:19.818 00:16:19.818 test: (groupid=0, jobs=1): err= 0: pid=89486: Tue May 14 03:03:05 2024 00:16:19.818 read: IOPS=6570, BW=25.7MiB/s (26.9MB/s)(255MiB/9924msec) 00:16:19.818 slat (nsec): min=4413, max=36101, avg=6482.47, stdev=1861.61 00:16:19.818 clat (usec): min=728, max=38908, avg=19470.73, stdev=905.73 00:16:19.818 lat (usec): min=733, max=38915, avg=19477.21, stdev=905.72 00:16:19.818 clat percentiles (usec): 00:16:19.818 | 1.00th=[18482], 5.00th=[18744], 10.00th=[18744], 20.00th=[19006], 00:16:19.818 | 30.00th=[19268], 40.00th=[19268], 50.00th=[19268], 60.00th=[19530], 00:16:19.818 | 70.00th=[19530], 80.00th=[19792], 90.00th=[20055], 95.00th=[20317], 00:16:19.818 | 99.00th=[22414], 99.50th=[22676], 99.90th=[29492], 99.95th=[34341], 00:16:19.818 | 99.99th=[38011] 00:16:19.818 write: IOPS=12.0k, BW=46.9MiB/s (49.1MB/s)(256MiB/5462msec); 0 zone resets 00:16:19.818 slat (usec): min=5, max=539, avg= 8.97, stdev= 5.46 00:16:19.818 clat (usec): min=588, max=60002, avg=10610.44, stdev=13334.22 00:16:19.818 lat (usec): min=597, max=60010, avg=10619.41, stdev=13334.24 00:16:19.818 clat percentiles (usec): 00:16:19.818 | 1.00th=[ 947], 5.00th=[ 1139], 10.00th=[ 1270], 20.00th=[ 1450], 00:16:19.818 | 30.00th=[ 1663], 40.00th=[ 2089], 50.00th=[ 7046], 60.00th=[ 7963], 00:16:19.818 | 70.00th=[ 9110], 80.00th=[10945], 90.00th=[38536], 95.00th=[41681], 00:16:19.818 | 99.00th=[45876], 99.50th=[46924], 99.90th=[49021], 99.95th=[50070], 00:16:19.818 | 99.99th=[57410] 00:16:19.818 bw ( KiB/s): 
min=39032, max=64944, per=99.29%, avg=47651.09, stdev=9607.17, samples=11 00:16:19.818 iops : min= 9758, max=16236, avg=11912.73, stdev=2401.71, samples=11 00:16:19.818 lat (usec) : 750=0.02%, 1000=0.80% 00:16:19.818 lat (msec) : 2=18.76%, 4=1.41%, 10=16.81%, 20=47.73%, 50=14.44% 00:16:19.818 lat (msec) : 100=0.03% 00:16:19.818 cpu : usr=98.82%, sys=0.42%, ctx=45, majf=0, minf=5577 00:16:19.818 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:19.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:19.818 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:19.818 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:19.818 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:19.818 00:16:19.818 Run status group 0 (all jobs): 00:16:19.818 READ: bw=25.7MiB/s (26.9MB/s), 25.7MiB/s-25.7MiB/s (26.9MB/s-26.9MB/s), io=255MiB (267MB), run=9924-9924msec 00:16:19.818 WRITE: bw=46.9MiB/s (49.1MB/s), 46.9MiB/s-46.9MiB/s (49.1MB/s-49.1MB/s), io=256MiB (268MB), run=5462-5462msec 00:16:20.085 ----------------------------------------------------- 00:16:20.085 Suppressions used: 00:16:20.085 count bytes template 00:16:20.085 1 5 /usr/src/fio/parse.c 00:16:20.085 2 192 /usr/src/fio/iolog.c 00:16:20.085 1 8 libtcmalloc_minimal.so 00:16:20.085 1 904 libcrypto.so 00:16:20.085 ----------------------------------------------------- 00:16:20.085 00:16:20.085 03:03:06 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:16:20.085 03:03:06 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:20.085 03:03:06 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:20.085 03:03:06 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:20.085 Remove shared memory files 00:16:20.085 03:03:06 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:16:20.085 03:03:06 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:20.085 03:03:06 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:16:20.085 03:03:06 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:16:20.085 03:03:06 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid75077 /dev/shm/spdk_tgt_trace.pid87844 00:16:20.085 03:03:06 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:20.347 03:03:06 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:16:20.347 ************************************ 00:16:20.347 END TEST ftl_fio_basic 00:16:20.347 ************************************ 00:16:20.347 00:16:20.347 real 1m4.478s 00:16:20.347 user 2m25.530s 00:16:20.347 sys 0m3.585s 00:16:20.347 03:03:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:20.347 03:03:06 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:20.347 03:03:06 ftl -- ftl/ftl.sh@75 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:20.347 03:03:06 ftl -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:16:20.347 03:03:06 ftl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:20.347 03:03:06 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:20.347 ************************************ 00:16:20.347 START TEST ftl_bdevperf 00:16:20.347 ************************************ 00:16:20.347 03:03:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1121 -- # 
/home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:20.347 * Looking for test storage... 00:16:20.347 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:20.347 03:03:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:20.347 03:03:06 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:16:20.347 03:03:06 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:20.347 03:03:06 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:20.347 03:03:06 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:20.347 03:03:06 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:20.347 03:03:06 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:20.347 03:03:06 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:20.347 03:03:06 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:20.347 03:03:06 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:20.347 03:03:06 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:20.347 03:03:06 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:20.347 03:03:06 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:20.347 03:03:06 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:20.347 03:03:06 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:20.347 03:03:06 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:20.347 03:03:06 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:20.347 03:03:06 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:20.347 03:03:06 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:20.347 03:03:06 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:20.347 03:03:06 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:20.347 03:03:06 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:20.347 03:03:06 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:20.347 03:03:06 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:20.347 03:03:06 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:20.347 03:03:06 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:20.347 03:03:06 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:20.347 03:03:06 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:20.347 03:03:06 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:20.347 03:03:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:16:20.347 03:03:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # 
cache_device=0000:00:10.0 00:16:20.347 03:03:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:16:20.347 03:03:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:20.347 03:03:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:16:20.347 03:03:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # timing_enter '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:16:20.347 03:03:06 ftl.ftl_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:20.347 03:03:06 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:20.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:20.347 03:03:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@19 -- # bdevperf_pid=89727 00:16:20.347 03:03:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:16:20.347 03:03:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:16:20.347 03:03:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # waitforlisten 89727 00:16:20.347 03:03:06 ftl.ftl_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 89727 ']' 00:16:20.347 03:03:06 ftl.ftl_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.347 03:03:06 ftl.ftl_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:20.347 03:03:06 ftl.ftl_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:20.347 03:03:06 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:20.347 03:03:06 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:20.605 [2024-05-14 03:03:06.420253] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:16:20.605 [2024-05-14 03:03:06.421014] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89727 ] 00:16:20.605 [2024-05-14 03:03:06.568899] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
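Editor's note on the waitforlisten step traced above: bdevperf is launched with -z (stay idle until told to run over RPC) and -T ftl0, and the harness then waits for the application's RPC socket before issuing any bdev configuration calls. A minimal sketch of that pattern follows, assuming the default /var/tmp/spdk.sock socket; the real waitforlisten helper in autotest_common.sh does more bookkeeping than this:

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 &
bdevperf_pid=$!
# poll the RPC socket until the app answers; bail out if the process died first
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
  kill -0 "$bdevperf_pid" 2>/dev/null || { echo "bdevperf exited before its RPC socket came up" >&2; exit 1; }
  sleep 0.5
done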
00:16:20.605 [2024-05-14 03:03:06.590402] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:20.864 [2024-05-14 03:03:06.634025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.430 03:03:07 ftl.ftl_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:21.430 03:03:07 ftl.ftl_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:16:21.430 03:03:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:21.430 03:03:07 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:16:21.430 03:03:07 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:21.430 03:03:07 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:16:21.430 03:03:07 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:16:21.430 03:03:07 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:21.688 03:03:07 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:21.688 03:03:07 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:16:21.688 03:03:07 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:21.688 03:03:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1374 -- # local bdev_name=nvme0n1 00:16:21.688 03:03:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1375 -- # local bdev_info 00:16:21.688 03:03:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1376 -- # local bs 00:16:21.688 03:03:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1377 -- # local nb 00:16:21.688 03:03:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:21.950 03:03:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:16:21.950 { 00:16:21.950 "name": "nvme0n1", 00:16:21.950 "aliases": [ 00:16:21.950 "4125702c-b740-4cf1-b122-d7d94f751dbe" 00:16:21.950 ], 00:16:21.950 "product_name": "NVMe disk", 00:16:21.950 "block_size": 4096, 00:16:21.950 "num_blocks": 1310720, 00:16:21.950 "uuid": "4125702c-b740-4cf1-b122-d7d94f751dbe", 00:16:21.950 "assigned_rate_limits": { 00:16:21.950 "rw_ios_per_sec": 0, 00:16:21.950 "rw_mbytes_per_sec": 0, 00:16:21.950 "r_mbytes_per_sec": 0, 00:16:21.950 "w_mbytes_per_sec": 0 00:16:21.950 }, 00:16:21.950 "claimed": true, 00:16:21.950 "claim_type": "read_many_write_one", 00:16:21.950 "zoned": false, 00:16:21.950 "supported_io_types": { 00:16:21.950 "read": true, 00:16:21.950 "write": true, 00:16:21.950 "unmap": true, 00:16:21.950 "write_zeroes": true, 00:16:21.950 "flush": true, 00:16:21.950 "reset": true, 00:16:21.950 "compare": true, 00:16:21.950 "compare_and_write": false, 00:16:21.950 "abort": true, 00:16:21.950 "nvme_admin": true, 00:16:21.950 "nvme_io": true 00:16:21.950 }, 00:16:21.950 "driver_specific": { 00:16:21.950 "nvme": [ 00:16:21.950 { 00:16:21.950 "pci_address": "0000:00:11.0", 00:16:21.950 "trid": { 00:16:21.950 "trtype": "PCIe", 00:16:21.950 "traddr": "0000:00:11.0" 00:16:21.950 }, 00:16:21.950 "ctrlr_data": { 00:16:21.950 "cntlid": 0, 00:16:21.950 "vendor_id": "0x1b36", 00:16:21.950 "model_number": "QEMU NVMe Ctrl", 00:16:21.950 "serial_number": "12341", 00:16:21.950 "firmware_revision": "8.0.0", 00:16:21.950 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:21.950 "oacs": { 00:16:21.950 "security": 0, 00:16:21.950 "format": 1, 00:16:21.950 "firmware": 0, 00:16:21.950 "ns_manage": 1 00:16:21.950 }, 00:16:21.950 "multi_ctrlr": false, 
00:16:21.950 "ana_reporting": false 00:16:21.950 }, 00:16:21.950 "vs": { 00:16:21.950 "nvme_version": "1.4" 00:16:21.950 }, 00:16:21.950 "ns_data": { 00:16:21.950 "id": 1, 00:16:21.950 "can_share": false 00:16:21.950 } 00:16:21.950 } 00:16:21.950 ], 00:16:21.950 "mp_policy": "active_passive" 00:16:21.950 } 00:16:21.950 } 00:16:21.950 ]' 00:16:21.950 03:03:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:16:21.950 03:03:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # bs=4096 00:16:21.950 03:03:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:16:22.207 03:03:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # nb=1310720 00:16:22.207 03:03:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bdev_size=5120 00:16:22.207 03:03:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # echo 5120 00:16:22.207 03:03:07 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:16:22.207 03:03:07 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:22.207 03:03:07 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:16:22.207 03:03:07 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:22.207 03:03:08 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:22.465 03:03:08 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=e8865572-fe6b-434c-ab1f-6c07a7f92ec0 00:16:22.465 03:03:08 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:16:22.465 03:03:08 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e8865572-fe6b-434c-ab1f-6c07a7f92ec0 00:16:22.465 03:03:08 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:22.723 03:03:08 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=b2851337-5c0b-4fbb-bc7e-eff1521bae32 00:16:22.723 03:03:08 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u b2851337-5c0b-4fbb-bc7e-eff1521bae32 00:16:22.982 03:03:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # split_bdev=8950bec7-19dc-4d36-86c6-26e1814053fb 00:16:22.982 03:03:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@24 -- # create_nv_cache_bdev nvc0 0000:00:10.0 8950bec7-19dc-4d36-86c6-26e1814053fb 00:16:22.982 03:03:08 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:16:22.982 03:03:08 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:22.982 03:03:08 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=8950bec7-19dc-4d36-86c6-26e1814053fb 00:16:22.982 03:03:08 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:16:22.982 03:03:08 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 8950bec7-19dc-4d36-86c6-26e1814053fb 00:16:22.982 03:03:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1374 -- # local bdev_name=8950bec7-19dc-4d36-86c6-26e1814053fb 00:16:22.982 03:03:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1375 -- # local bdev_info 00:16:22.982 03:03:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1376 -- # local bs 00:16:22.982 03:03:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1377 -- # local nb 00:16:22.982 03:03:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8950bec7-19dc-4d36-86c6-26e1814053fb 00:16:23.240 03:03:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # 
bdev_info='[ 00:16:23.240 { 00:16:23.240 "name": "8950bec7-19dc-4d36-86c6-26e1814053fb", 00:16:23.240 "aliases": [ 00:16:23.240 "lvs/nvme0n1p0" 00:16:23.240 ], 00:16:23.240 "product_name": "Logical Volume", 00:16:23.240 "block_size": 4096, 00:16:23.240 "num_blocks": 26476544, 00:16:23.240 "uuid": "8950bec7-19dc-4d36-86c6-26e1814053fb", 00:16:23.240 "assigned_rate_limits": { 00:16:23.240 "rw_ios_per_sec": 0, 00:16:23.240 "rw_mbytes_per_sec": 0, 00:16:23.240 "r_mbytes_per_sec": 0, 00:16:23.240 "w_mbytes_per_sec": 0 00:16:23.240 }, 00:16:23.240 "claimed": false, 00:16:23.240 "zoned": false, 00:16:23.240 "supported_io_types": { 00:16:23.240 "read": true, 00:16:23.240 "write": true, 00:16:23.240 "unmap": true, 00:16:23.240 "write_zeroes": true, 00:16:23.240 "flush": false, 00:16:23.240 "reset": true, 00:16:23.240 "compare": false, 00:16:23.240 "compare_and_write": false, 00:16:23.240 "abort": false, 00:16:23.240 "nvme_admin": false, 00:16:23.240 "nvme_io": false 00:16:23.240 }, 00:16:23.240 "driver_specific": { 00:16:23.240 "lvol": { 00:16:23.240 "lvol_store_uuid": "b2851337-5c0b-4fbb-bc7e-eff1521bae32", 00:16:23.240 "base_bdev": "nvme0n1", 00:16:23.240 "thin_provision": true, 00:16:23.240 "num_allocated_clusters": 0, 00:16:23.240 "snapshot": false, 00:16:23.240 "clone": false, 00:16:23.240 "esnap_clone": false 00:16:23.240 } 00:16:23.240 } 00:16:23.240 } 00:16:23.240 ]' 00:16:23.498 03:03:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:16:23.498 03:03:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # bs=4096 00:16:23.498 03:03:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:16:23.498 03:03:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # nb=26476544 00:16:23.498 03:03:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:16:23.498 03:03:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # echo 103424 00:16:23.498 03:03:09 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:16:23.498 03:03:09 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:16:23.498 03:03:09 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:23.755 03:03:09 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:23.755 03:03:09 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:23.755 03:03:09 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 8950bec7-19dc-4d36-86c6-26e1814053fb 00:16:23.755 03:03:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1374 -- # local bdev_name=8950bec7-19dc-4d36-86c6-26e1814053fb 00:16:23.755 03:03:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1375 -- # local bdev_info 00:16:23.755 03:03:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1376 -- # local bs 00:16:23.755 03:03:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1377 -- # local nb 00:16:23.755 03:03:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8950bec7-19dc-4d36-86c6-26e1814053fb 00:16:24.021 03:03:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:16:24.021 { 00:16:24.021 "name": "8950bec7-19dc-4d36-86c6-26e1814053fb", 00:16:24.021 "aliases": [ 00:16:24.021 "lvs/nvme0n1p0" 00:16:24.021 ], 00:16:24.021 "product_name": "Logical Volume", 00:16:24.021 "block_size": 4096, 00:16:24.021 "num_blocks": 26476544, 00:16:24.021 "uuid": "8950bec7-19dc-4d36-86c6-26e1814053fb", 
00:16:24.021 "assigned_rate_limits": { 00:16:24.021 "rw_ios_per_sec": 0, 00:16:24.021 "rw_mbytes_per_sec": 0, 00:16:24.021 "r_mbytes_per_sec": 0, 00:16:24.021 "w_mbytes_per_sec": 0 00:16:24.021 }, 00:16:24.021 "claimed": false, 00:16:24.021 "zoned": false, 00:16:24.021 "supported_io_types": { 00:16:24.021 "read": true, 00:16:24.021 "write": true, 00:16:24.021 "unmap": true, 00:16:24.021 "write_zeroes": true, 00:16:24.022 "flush": false, 00:16:24.022 "reset": true, 00:16:24.022 "compare": false, 00:16:24.022 "compare_and_write": false, 00:16:24.022 "abort": false, 00:16:24.022 "nvme_admin": false, 00:16:24.022 "nvme_io": false 00:16:24.022 }, 00:16:24.022 "driver_specific": { 00:16:24.022 "lvol": { 00:16:24.022 "lvol_store_uuid": "b2851337-5c0b-4fbb-bc7e-eff1521bae32", 00:16:24.022 "base_bdev": "nvme0n1", 00:16:24.022 "thin_provision": true, 00:16:24.022 "num_allocated_clusters": 0, 00:16:24.022 "snapshot": false, 00:16:24.022 "clone": false, 00:16:24.022 "esnap_clone": false 00:16:24.022 } 00:16:24.022 } 00:16:24.022 } 00:16:24.022 ]' 00:16:24.022 03:03:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:16:24.022 03:03:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # bs=4096 00:16:24.022 03:03:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:16:24.022 03:03:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # nb=26476544 00:16:24.022 03:03:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:16:24.022 03:03:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # echo 103424 00:16:24.022 03:03:09 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:16:24.022 03:03:09 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:24.283 03:03:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@24 -- # nv_cache=nvc0n1p0 00:16:24.283 03:03:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # get_bdev_size 8950bec7-19dc-4d36-86c6-26e1814053fb 00:16:24.283 03:03:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1374 -- # local bdev_name=8950bec7-19dc-4d36-86c6-26e1814053fb 00:16:24.283 03:03:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1375 -- # local bdev_info 00:16:24.283 03:03:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1376 -- # local bs 00:16:24.283 03:03:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1377 -- # local nb 00:16:24.283 03:03:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8950bec7-19dc-4d36-86c6-26e1814053fb 00:16:24.546 03:03:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:16:24.546 { 00:16:24.546 "name": "8950bec7-19dc-4d36-86c6-26e1814053fb", 00:16:24.546 "aliases": [ 00:16:24.546 "lvs/nvme0n1p0" 00:16:24.546 ], 00:16:24.546 "product_name": "Logical Volume", 00:16:24.546 "block_size": 4096, 00:16:24.546 "num_blocks": 26476544, 00:16:24.546 "uuid": "8950bec7-19dc-4d36-86c6-26e1814053fb", 00:16:24.546 "assigned_rate_limits": { 00:16:24.546 "rw_ios_per_sec": 0, 00:16:24.546 "rw_mbytes_per_sec": 0, 00:16:24.546 "r_mbytes_per_sec": 0, 00:16:24.546 "w_mbytes_per_sec": 0 00:16:24.546 }, 00:16:24.546 "claimed": false, 00:16:24.546 "zoned": false, 00:16:24.546 "supported_io_types": { 00:16:24.546 "read": true, 00:16:24.546 "write": true, 00:16:24.546 "unmap": true, 00:16:24.546 "write_zeroes": true, 00:16:24.546 "flush": false, 00:16:24.546 "reset": true, 00:16:24.546 "compare": false, 00:16:24.546 
"compare_and_write": false, 00:16:24.546 "abort": false, 00:16:24.546 "nvme_admin": false, 00:16:24.546 "nvme_io": false 00:16:24.546 }, 00:16:24.546 "driver_specific": { 00:16:24.546 "lvol": { 00:16:24.546 "lvol_store_uuid": "b2851337-5c0b-4fbb-bc7e-eff1521bae32", 00:16:24.546 "base_bdev": "nvme0n1", 00:16:24.546 "thin_provision": true, 00:16:24.546 "num_allocated_clusters": 0, 00:16:24.546 "snapshot": false, 00:16:24.546 "clone": false, 00:16:24.546 "esnap_clone": false 00:16:24.546 } 00:16:24.546 } 00:16:24.546 } 00:16:24.546 ]' 00:16:24.546 03:03:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:16:24.546 03:03:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # bs=4096 00:16:24.546 03:03:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:16:24.546 03:03:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # nb=26476544 00:16:24.546 03:03:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:16:24.546 03:03:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # echo 103424 00:16:24.546 03:03:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # l2p_dram_size_mb=20 00:16:24.546 03:03:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 8950bec7-19dc-4d36-86c6-26e1814053fb -c nvc0n1p0 --l2p_dram_limit 20 00:16:24.818 [2024-05-14 03:03:10.730927] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.818 [2024-05-14 03:03:10.730995] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:24.818 [2024-05-14 03:03:10.731035] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:24.818 [2024-05-14 03:03:10.731047] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.818 [2024-05-14 03:03:10.731120] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.818 [2024-05-14 03:03:10.731137] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:24.818 [2024-05-14 03:03:10.731206] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:16:24.818 [2024-05-14 03:03:10.731224] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.818 [2024-05-14 03:03:10.731274] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:24.818 [2024-05-14 03:03:10.731685] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:24.818 [2024-05-14 03:03:10.731742] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.818 [2024-05-14 03:03:10.731780] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:24.818 [2024-05-14 03:03:10.731809] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.482 ms 00:16:24.818 [2024-05-14 03:03:10.731827] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.818 [2024-05-14 03:03:10.732076] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 178d789f-b3a1-433f-9662-f53bd4afbead 00:16:24.818 [2024-05-14 03:03:10.733106] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.818 [2024-05-14 03:03:10.733158] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:24.818 [2024-05-14 03:03:10.733175] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.021 ms 00:16:24.818 [2024-05-14 03:03:10.733192] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.818 [2024-05-14 03:03:10.737927] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.818 [2024-05-14 03:03:10.738008] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:24.818 [2024-05-14 03:03:10.738028] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.685 ms 00:16:24.818 [2024-05-14 03:03:10.738044] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.818 [2024-05-14 03:03:10.738204] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.818 [2024-05-14 03:03:10.738230] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:24.818 [2024-05-14 03:03:10.738247] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:16:24.818 [2024-05-14 03:03:10.738262] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.818 [2024-05-14 03:03:10.738348] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.818 [2024-05-14 03:03:10.738383] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:24.818 [2024-05-14 03:03:10.738398] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:16:24.818 [2024-05-14 03:03:10.738412] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.818 [2024-05-14 03:03:10.738442] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:24.818 [2024-05-14 03:03:10.740164] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.818 [2024-05-14 03:03:10.740213] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:24.818 [2024-05-14 03:03:10.740243] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.727 ms 00:16:24.818 [2024-05-14 03:03:10.740258] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.818 [2024-05-14 03:03:10.740316] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.818 [2024-05-14 03:03:10.740333] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:24.818 [2024-05-14 03:03:10.740350] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:16:24.818 [2024-05-14 03:03:10.740362] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.818 [2024-05-14 03:03:10.740397] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:24.818 [2024-05-14 03:03:10.740530] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:24.818 [2024-05-14 03:03:10.740574] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:24.818 [2024-05-14 03:03:10.740592] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:24.818 [2024-05-14 03:03:10.740610] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:24.818 [2024-05-14 03:03:10.740624] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:24.818 [2024-05-14 03:03:10.740638] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:24.818 [2024-05-14 03:03:10.740650] 
ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:24.818 [2024-05-14 03:03:10.740663] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:24.818 [2024-05-14 03:03:10.740677] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:24.818 [2024-05-14 03:03:10.740691] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.818 [2024-05-14 03:03:10.740705] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:24.818 [2024-05-14 03:03:10.740719] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:16:24.818 [2024-05-14 03:03:10.740730] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.818 [2024-05-14 03:03:10.740819] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.818 [2024-05-14 03:03:10.740845] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:24.818 [2024-05-14 03:03:10.740862] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:16:24.818 [2024-05-14 03:03:10.740874] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.818 [2024-05-14 03:03:10.740960] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:24.818 [2024-05-14 03:03:10.740990] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:24.818 [2024-05-14 03:03:10.741018] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:24.818 [2024-05-14 03:03:10.741031] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:24.818 [2024-05-14 03:03:10.741044] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:24.818 [2024-05-14 03:03:10.741056] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:24.818 [2024-05-14 03:03:10.741069] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:24.818 [2024-05-14 03:03:10.741080] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:24.818 [2024-05-14 03:03:10.741095] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:24.819 [2024-05-14 03:03:10.741106] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:24.819 [2024-05-14 03:03:10.741118] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:24.819 [2024-05-14 03:03:10.741156] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:24.819 [2024-05-14 03:03:10.741174] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:24.819 [2024-05-14 03:03:10.741190] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:24.819 [2024-05-14 03:03:10.741204] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:16:24.819 [2024-05-14 03:03:10.741215] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:24.819 [2024-05-14 03:03:10.741228] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:24.819 [2024-05-14 03:03:10.741239] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:16:24.819 [2024-05-14 03:03:10.741252] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:24.819 [2024-05-14 03:03:10.741263] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:24.819 [2024-05-14 03:03:10.741279] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 97.88 MiB 00:16:24.819 [2024-05-14 03:03:10.741291] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:24.819 [2024-05-14 03:03:10.741306] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:24.819 [2024-05-14 03:03:10.741317] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:24.819 [2024-05-14 03:03:10.741331] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:24.819 [2024-05-14 03:03:10.741342] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:24.819 [2024-05-14 03:03:10.741356] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:16:24.819 [2024-05-14 03:03:10.741366] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:24.819 [2024-05-14 03:03:10.741381] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:24.819 [2024-05-14 03:03:10.741393] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:24.819 [2024-05-14 03:03:10.741406] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:24.819 [2024-05-14 03:03:10.741416] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:24.819 [2024-05-14 03:03:10.741429] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:16:24.819 [2024-05-14 03:03:10.741440] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:24.819 [2024-05-14 03:03:10.741452] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:24.819 [2024-05-14 03:03:10.741463] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:24.819 [2024-05-14 03:03:10.741475] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:24.819 [2024-05-14 03:03:10.741486] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:24.819 [2024-05-14 03:03:10.741499] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:16:24.819 [2024-05-14 03:03:10.741509] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:24.819 [2024-05-14 03:03:10.741523] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:24.819 [2024-05-14 03:03:10.741535] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:24.819 [2024-05-14 03:03:10.741549] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:24.819 [2024-05-14 03:03:10.741561] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:24.819 [2024-05-14 03:03:10.741594] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:24.819 [2024-05-14 03:03:10.741608] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:24.819 [2024-05-14 03:03:10.741621] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:24.819 [2024-05-14 03:03:10.741632] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:24.819 [2024-05-14 03:03:10.741645] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:24.819 [2024-05-14 03:03:10.741656] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:24.819 [2024-05-14 03:03:10.741670] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:24.819 [2024-05-14 03:03:10.741685] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:24.819 [2024-05-14 03:03:10.741701] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:24.819 [2024-05-14 03:03:10.741714] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:16:24.819 [2024-05-14 03:03:10.741728] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:16:24.819 [2024-05-14 03:03:10.741740] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:16:24.819 [2024-05-14 03:03:10.741756] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:16:24.819 [2024-05-14 03:03:10.741778] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:16:24.819 [2024-05-14 03:03:10.741791] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:16:24.819 [2024-05-14 03:03:10.741803] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:16:24.819 [2024-05-14 03:03:10.741820] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:16:24.819 [2024-05-14 03:03:10.741832] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:16:24.819 [2024-05-14 03:03:10.741846] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:16:24.819 [2024-05-14 03:03:10.741859] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:16:24.819 [2024-05-14 03:03:10.741873] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:16:24.819 [2024-05-14 03:03:10.741885] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:24.819 [2024-05-14 03:03:10.741913] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:24.819 [2024-05-14 03:03:10.741927] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:24.819 [2024-05-14 03:03:10.741942] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:24.819 [2024-05-14 03:03:10.741954] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:24.819 [2024-05-14 03:03:10.741969] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:24.819 [2024-05-14 03:03:10.741983] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.819 [2024-05-14 03:03:10.742000] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] 
name: Layout upgrade 00:16:24.819 [2024-05-14 03:03:10.742012] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.070 ms 00:16:24.819 [2024-05-14 03:03:10.742025] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.819 [2024-05-14 03:03:10.748305] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.819 [2024-05-14 03:03:10.748360] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:24.819 [2024-05-14 03:03:10.748380] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.228 ms 00:16:24.819 [2024-05-14 03:03:10.748394] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.819 [2024-05-14 03:03:10.748534] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.819 [2024-05-14 03:03:10.748557] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:24.819 [2024-05-14 03:03:10.748571] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:16:24.819 [2024-05-14 03:03:10.748596] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.819 [2024-05-14 03:03:10.773218] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.819 [2024-05-14 03:03:10.773299] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:24.819 [2024-05-14 03:03:10.773326] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.567 ms 00:16:24.819 [2024-05-14 03:03:10.773350] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.819 [2024-05-14 03:03:10.773413] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.819 [2024-05-14 03:03:10.773438] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:24.819 [2024-05-14 03:03:10.773456] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:24.819 [2024-05-14 03:03:10.773473] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.819 [2024-05-14 03:03:10.773923] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.819 [2024-05-14 03:03:10.773987] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:24.819 [2024-05-14 03:03:10.774009] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.345 ms 00:16:24.819 [2024-05-14 03:03:10.774040] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.819 [2024-05-14 03:03:10.774256] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.819 [2024-05-14 03:03:10.774298] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:24.819 [2024-05-14 03:03:10.774317] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.179 ms 00:16:24.819 [2024-05-14 03:03:10.774335] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.819 [2024-05-14 03:03:10.781202] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.819 [2024-05-14 03:03:10.781272] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:24.819 [2024-05-14 03:03:10.781308] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.833 ms 00:16:24.819 [2024-05-14 03:03:10.781326] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.819 [2024-05-14 03:03:10.790607] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 
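The L2P figures in the startup trace are internally consistent: the layout dump reports 20971520 L2P entries with a 4-byte address size, i.e. about 80 MiB of mapping metadata, matching the 80.00 MiB l2p region listed above, and since bdev_ftl_create was given --l2p_dram_limit 20 only around 20 MiB of that table is kept resident, hence the "maximum resident size is: 19 (of 20) MiB" notice. A quick back-of-the-envelope check (values taken from the log; treating the limit as an L2P cache budget and assuming one entry per 4 KiB block are interpretations, not statements from the trace itself):

  echo $(( 20971520 * 4 / 1024 / 1024 ))   # 80  -> MiB of L2P entries in the layout
  echo $(( 20971520 * 4096 / 1024**3 ))    # 80  -> GiB of logical space those entries could map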
00:16:24.819 [2024-05-14 03:03:10.795555] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.819 [2024-05-14 03:03:10.795609] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:24.819 [2024-05-14 03:03:10.795630] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.121 ms 00:16:24.819 [2024-05-14 03:03:10.795642] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.078 [2024-05-14 03:03:10.845532] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.078 [2024-05-14 03:03:10.845638] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:25.078 [2024-05-14 03:03:10.845679] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.838 ms 00:16:25.078 [2024-05-14 03:03:10.845691] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.078 [2024-05-14 03:03:10.845757] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:16:25.078 [2024-05-14 03:03:10.845778] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:16:26.981 [2024-05-14 03:03:12.924146] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.981 [2024-05-14 03:03:12.924242] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:26.981 [2024-05-14 03:03:12.924285] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2078.395 ms 00:16:26.981 [2024-05-14 03:03:12.924299] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.981 [2024-05-14 03:03:12.924571] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.981 [2024-05-14 03:03:12.924598] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:26.981 [2024-05-14 03:03:12.924630] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.202 ms 00:16:26.981 [2024-05-14 03:03:12.924643] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.981 [2024-05-14 03:03:12.928655] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.981 [2024-05-14 03:03:12.928709] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:26.981 [2024-05-14 03:03:12.928764] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.959 ms 00:16:26.981 [2024-05-14 03:03:12.928777] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.981 [2024-05-14 03:03:12.932412] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.981 [2024-05-14 03:03:12.932464] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:26.981 [2024-05-14 03:03:12.932500] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.587 ms 00:16:26.981 [2024-05-14 03:03:12.932511] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.981 [2024-05-14 03:03:12.932759] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.981 [2024-05-14 03:03:12.932796] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:26.981 [2024-05-14 03:03:12.932815] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.204 ms 00:16:26.981 [2024-05-14 03:03:12.932828] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.981 [2024-05-14 03:03:12.954338] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.981 [2024-05-14 03:03:12.954402] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:26.981 [2024-05-14 03:03:12.954440] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.470 ms 00:16:26.981 [2024-05-14 03:03:12.954453] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.981 [2024-05-14 03:03:12.958714] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.981 [2024-05-14 03:03:12.958772] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:26.981 [2024-05-14 03:03:12.958810] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.202 ms 00:16:26.981 [2024-05-14 03:03:12.958822] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.981 [2024-05-14 03:03:12.960806] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.981 [2024-05-14 03:03:12.960858] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:26.981 [2024-05-14 03:03:12.960899] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.940 ms 00:16:26.981 [2024-05-14 03:03:12.960911] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.981 [2024-05-14 03:03:12.965066] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.981 [2024-05-14 03:03:12.965121] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:26.981 [2024-05-14 03:03:12.965169] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.124 ms 00:16:26.981 [2024-05-14 03:03:12.965182] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.981 [2024-05-14 03:03:12.965233] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.981 [2024-05-14 03:03:12.965251] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:26.981 [2024-05-14 03:03:12.965275] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:26.981 [2024-05-14 03:03:12.965287] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.981 [2024-05-14 03:03:12.965431] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.981 [2024-05-14 03:03:12.965450] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:26.981 [2024-05-14 03:03:12.965470] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:16:26.981 [2024-05-14 03:03:12.965483] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.981 [2024-05-14 03:03:12.966545] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2235.113 ms, result 0 00:16:26.981 { 00:16:26.981 "name": "ftl0", 00:16:26.981 "uuid": "178d789f-b3a1-433f-9662-f53bd4afbead" 00:16:26.981 } 00:16:26.981 03:03:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:16:26.981 03:03:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # jq -r .name 00:16:26.981 03:03:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # grep -qw ftl0 00:16:27.240 03:03:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:16:27.500 [2024-05-14 03:03:13.374065] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel 
created on ftl0 00:16:27.500 I/O size of 69632 is greater than zero copy threshold (65536). 00:16:27.500 Zero copy mechanism will not be used. 00:16:27.500 Running I/O for 4 seconds... 00:16:31.686 00:16:31.686 Latency(us) 00:16:31.686 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:31.686 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:16:31.686 ftl0 : 4.00 1646.87 109.36 0.00 0.00 635.20 247.62 1995.87 00:16:31.686 =================================================================================================================== 00:16:31.686 Total : 1646.87 109.36 0.00 0.00 635.20 247.62 1995.87 00:16:31.686 [2024-05-14 03:03:17.380848] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:31.686 0 00:16:31.686 03:03:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:16:31.686 [2024-05-14 03:03:17.519902] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:31.686 Running I/O for 4 seconds... 00:16:35.868 00:16:35.868 Latency(us) 00:16:35.868 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:35.868 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:16:35.868 ftl0 : 4.02 7467.00 29.17 0.00 0.00 17097.39 316.51 40274.85 00:16:35.868 =================================================================================================================== 00:16:35.868 Total : 7467.00 29.17 0.00 0.00 17097.39 0.00 40274.85 00:16:35.868 [2024-05-14 03:03:21.545164] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:35.868 0 00:16:35.868 03:03:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@33 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:16:35.868 [2024-05-14 03:03:21.671012] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:35.868 Running I/O for 4 seconds... 
00:16:40.052 00:16:40.052 Latency(us) 00:16:40.052 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:40.052 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:40.052 Verification LBA range: start 0x0 length 0x1400000 00:16:40.052 ftl0 : 4.01 5397.17 21.08 0.00 0.00 23623.21 361.19 29312.47 00:16:40.052 =================================================================================================================== 00:16:40.052 Total : 5397.17 21.08 0.00 0.00 23623.21 0.00 29312.47 00:16:40.052 [2024-05-14 03:03:25.693619] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:40.052 0 00:16:40.052 03:03:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:16:40.052 [2024-05-14 03:03:25.953992] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.052 [2024-05-14 03:03:25.954064] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:40.052 [2024-05-14 03:03:25.954123] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:40.052 [2024-05-14 03:03:25.954134] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.052 [2024-05-14 03:03:25.954216] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:40.052 [2024-05-14 03:03:25.954677] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.052 [2024-05-14 03:03:25.954715] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:40.052 [2024-05-14 03:03:25.954729] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.438 ms 00:16:40.052 [2024-05-14 03:03:25.954742] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.052 [2024-05-14 03:03:25.956607] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.052 [2024-05-14 03:03:25.956713] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:40.052 [2024-05-14 03:03:25.956736] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.826 ms 00:16:40.052 [2024-05-14 03:03:25.956750] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.325 [2024-05-14 03:03:26.134370] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.325 [2024-05-14 03:03:26.134460] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:40.325 [2024-05-14 03:03:26.134481] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 177.595 ms 00:16:40.325 [2024-05-14 03:03:26.134494] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.325 [2024-05-14 03:03:26.140530] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.325 [2024-05-14 03:03:26.140596] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:40.325 [2024-05-14 03:03:26.140611] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.984 ms 00:16:40.325 [2024-05-14 03:03:26.140626] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.325 [2024-05-14 03:03:26.142237] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.325 [2024-05-14 03:03:26.142325] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:40.325 [2024-05-14 03:03:26.142356] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 1.533 ms 00:16:40.325 [2024-05-14 03:03:26.142368] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.325 [2024-05-14 03:03:26.146649] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.325 [2024-05-14 03:03:26.146724] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:40.325 [2024-05-14 03:03:26.146741] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.245 ms 00:16:40.325 [2024-05-14 03:03:26.146753] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.325 [2024-05-14 03:03:26.146875] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.325 [2024-05-14 03:03:26.146899] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:40.325 [2024-05-14 03:03:26.146911] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:16:40.325 [2024-05-14 03:03:26.146941] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.325 [2024-05-14 03:03:26.148751] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.325 [2024-05-14 03:03:26.148840] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:40.325 [2024-05-14 03:03:26.148855] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.759 ms 00:16:40.325 [2024-05-14 03:03:26.148869] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.325 [2024-05-14 03:03:26.150626] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.325 [2024-05-14 03:03:26.150695] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:40.325 [2024-05-14 03:03:26.150710] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.721 ms 00:16:40.325 [2024-05-14 03:03:26.150721] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.325 [2024-05-14 03:03:26.152186] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.325 [2024-05-14 03:03:26.152286] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:40.325 [2024-05-14 03:03:26.152302] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.430 ms 00:16:40.326 [2024-05-14 03:03:26.152314] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.326 [2024-05-14 03:03:26.153517] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.326 [2024-05-14 03:03:26.153619] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:40.326 [2024-05-14 03:03:26.153634] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.142 ms 00:16:40.326 [2024-05-14 03:03:26.153645] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.326 [2024-05-14 03:03:26.153681] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:40.326 [2024-05-14 03:03:26.153721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.153735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.153751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.153762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 
[2024-05-14 03:03:26.153774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.153784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.153796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.153805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.153817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.153827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.153855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.153882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.153909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.153920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.153933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.153944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.153957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.153968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.153983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.153994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.154006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.154017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.154030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.154040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.154053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.154064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.154077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.154088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.154102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: 
free 00:16:40.326 [2024-05-14 03:03:26.154114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.154126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.154137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.154150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.154161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.154175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.154186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.154214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.154227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.154240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.154251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.154264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.154275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.154287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.154298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.154310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.154321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.154333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.154344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.154357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.154368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.154382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.154393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.154406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.154417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 
261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.154431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.154442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.154455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.154466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.154478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.154489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.154501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.154512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.154525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.154536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.154549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.154571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.154587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.154598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.154614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.154625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.154638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.154649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.154661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.154673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.154686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.154697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.154710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.154721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.154733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.154745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.154759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.154770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.154784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.154795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.154808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.154819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.154831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:40.326 [2024-05-14 03:03:26.154842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:40.327 [2024-05-14 03:03:26.154854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:40.327 [2024-05-14 03:03:26.154865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:40.327 [2024-05-14 03:03:26.154878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:40.327 [2024-05-14 03:03:26.154888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:40.327 [2024-05-14 03:03:26.154900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:40.327 [2024-05-14 03:03:26.154911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:40.327 [2024-05-14 03:03:26.154924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:40.327 [2024-05-14 03:03:26.154935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:40.327 [2024-05-14 03:03:26.154948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:40.327 [2024-05-14 03:03:26.154963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:40.327 [2024-05-14 03:03:26.154978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:40.327 [2024-05-14 03:03:26.154990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:40.327 [2024-05-14 03:03:26.155012] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:40.327 [2024-05-14 03:03:26.155024] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 178d789f-b3a1-433f-9662-f53bd4afbead 00:16:40.327 [2024-05-14 03:03:26.155037] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:40.327 [2024-05-14 03:03:26.155047] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 
00:16:40.327 [2024-05-14 03:03:26.155060] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:40.327 [2024-05-14 03:03:26.155071] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:40.327 [2024-05-14 03:03:26.155084] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:40.327 [2024-05-14 03:03:26.155094] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:40.327 [2024-05-14 03:03:26.155116] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:40.327 [2024-05-14 03:03:26.155126] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:40.327 [2024-05-14 03:03:26.155151] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:40.327 [2024-05-14 03:03:26.155163] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.327 [2024-05-14 03:03:26.155179] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:40.327 [2024-05-14 03:03:26.155190] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.484 ms 00:16:40.327 [2024-05-14 03:03:26.155204] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.327 [2024-05-14 03:03:26.156542] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.327 [2024-05-14 03:03:26.156593] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:40.327 [2024-05-14 03:03:26.156608] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.315 ms 00:16:40.327 [2024-05-14 03:03:26.156620] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.327 [2024-05-14 03:03:26.156691] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.327 [2024-05-14 03:03:26.156711] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:40.327 [2024-05-14 03:03:26.156723] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:16:40.327 [2024-05-14 03:03:26.156735] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.327 [2024-05-14 03:03:26.161706] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.327 [2024-05-14 03:03:26.161760] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:40.327 [2024-05-14 03:03:26.161774] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.327 [2024-05-14 03:03:26.161786] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.327 [2024-05-14 03:03:26.161845] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.327 [2024-05-14 03:03:26.161862] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:40.327 [2024-05-14 03:03:26.161874] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.327 [2024-05-14 03:03:26.161900] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.327 [2024-05-14 03:03:26.161982] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.327 [2024-05-14 03:03:26.162038] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:40.327 [2024-05-14 03:03:26.162051] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.327 [2024-05-14 03:03:26.162064] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.327 [2024-05-14 03:03:26.162090] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.327 [2024-05-14 03:03:26.162107] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:40.327 [2024-05-14 03:03:26.162118] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.327 [2024-05-14 03:03:26.162129] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.327 [2024-05-14 03:03:26.170061] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.327 [2024-05-14 03:03:26.170157] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:40.327 [2024-05-14 03:03:26.170176] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.327 [2024-05-14 03:03:26.170189] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.327 [2024-05-14 03:03:26.173850] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.327 [2024-05-14 03:03:26.173925] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:40.327 [2024-05-14 03:03:26.173941] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.327 [2024-05-14 03:03:26.173956] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.327 [2024-05-14 03:03:26.174038] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.327 [2024-05-14 03:03:26.174060] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:40.327 [2024-05-14 03:03:26.174082] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.327 [2024-05-14 03:03:26.174094] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.327 [2024-05-14 03:03:26.174171] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.327 [2024-05-14 03:03:26.174189] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:40.327 [2024-05-14 03:03:26.174221] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.327 [2024-05-14 03:03:26.174245] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.327 [2024-05-14 03:03:26.174340] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.327 [2024-05-14 03:03:26.174371] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:40.327 [2024-05-14 03:03:26.174385] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.327 [2024-05-14 03:03:26.174398] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.327 [2024-05-14 03:03:26.174461] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.327 [2024-05-14 03:03:26.174487] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:40.327 [2024-05-14 03:03:26.174503] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.327 [2024-05-14 03:03:26.174527] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.327 [2024-05-14 03:03:26.174573] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.327 [2024-05-14 03:03:26.174602] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:40.327 [2024-05-14 03:03:26.174621] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.327 [2024-05-14 03:03:26.174634] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:16:40.327 [2024-05-14 03:03:26.174693] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.327 [2024-05-14 03:03:26.174714] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:40.327 [2024-05-14 03:03:26.174730] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.327 [2024-05-14 03:03:26.174742] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.327 [2024-05-14 03:03:26.174884] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 220.853 ms, result 0 00:16:40.327 true 00:16:40.327 03:03:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # killprocess 89727 00:16:40.327 03:03:26 ftl.ftl_bdevperf -- common/autotest_common.sh@946 -- # '[' -z 89727 ']' 00:16:40.327 03:03:26 ftl.ftl_bdevperf -- common/autotest_common.sh@950 -- # kill -0 89727 00:16:40.327 03:03:26 ftl.ftl_bdevperf -- common/autotest_common.sh@951 -- # uname 00:16:40.327 03:03:26 ftl.ftl_bdevperf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:40.327 03:03:26 ftl.ftl_bdevperf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 89727 00:16:40.327 killing process with pid 89727 00:16:40.327 Received shutdown signal, test time was about 4.000000 seconds 00:16:40.327 00:16:40.327 Latency(us) 00:16:40.327 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:40.327 =================================================================================================================== 00:16:40.327 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:40.327 03:03:26 ftl.ftl_bdevperf -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:40.327 03:03:26 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:40.327 03:03:26 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 89727' 00:16:40.327 03:03:26 ftl.ftl_bdevperf -- common/autotest_common.sh@965 -- # kill 89727 00:16:40.327 03:03:26 ftl.ftl_bdevperf -- common/autotest_common.sh@970 -- # wait 89727 00:16:40.610 03:03:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@38 -- # trap - SIGINT SIGTERM EXIT 00:16:40.610 03:03:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # timing_exit '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:16:40.610 03:03:26 ftl.ftl_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:40.610 03:03:26 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:40.610 03:03:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@41 -- # remove_shm 00:16:40.610 Remove shared memory files 00:16:40.610 03:03:26 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:40.610 03:03:26 ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:16:40.610 03:03:26 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:16:40.610 03:03:26 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:16:40.610 03:03:26 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:40.610 03:03:26 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:16:40.610 00:16:40.610 real 0m20.454s 00:16:40.610 user 0m23.878s 00:16:40.610 sys 0m1.043s 00:16:40.610 03:03:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:40.610 ************************************ 00:16:40.610 END TEST ftl_bdevperf 00:16:40.610 ************************************ 00:16:40.610 03:03:26 ftl.ftl_bdevperf -- common/autotest_common.sh@10 
-- # set +x 00:16:40.867 03:03:26 ftl -- ftl/ftl.sh@76 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:16:40.867 03:03:26 ftl -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:16:40.867 03:03:26 ftl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:40.867 03:03:26 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:40.867 ************************************ 00:16:40.867 START TEST ftl_trim 00:16:40.867 ************************************ 00:16:40.867 03:03:26 ftl.ftl_trim -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:16:40.867 * Looking for test storage... 00:16:40.867 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:40.868 03:03:26 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:40.868 03:03:26 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:16:40.868 03:03:26 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:40.868 03:03:26 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:40.868 03:03:26 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:40.868 03:03:26 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:40.868 03:03:26 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:40.868 03:03:26 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:40.868 03:03:26 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:40.868 03:03:26 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:40.868 03:03:26 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:40.868 03:03:26 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:40.868 03:03:26 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:40.868 03:03:26 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:40.868 03:03:26 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:40.868 03:03:26 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:40.868 03:03:26 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:40.868 03:03:26 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:40.868 03:03:26 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:40.868 03:03:26 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:40.868 03:03:26 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:40.868 03:03:26 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:40.868 03:03:26 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:40.868 03:03:26 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:40.868 03:03:26 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:40.868 03:03:26 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:40.868 03:03:26 ftl.ftl_trim 
-- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:40.868 03:03:26 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:40.868 03:03:26 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:40.868 03:03:26 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:40.868 03:03:26 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:16:40.868 03:03:26 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:16:40.868 03:03:26 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:16:40.868 03:03:26 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:16:40.868 03:03:26 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:16:40.868 03:03:26 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:16:40.868 03:03:26 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:16:40.868 03:03:26 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:16:40.868 03:03:26 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:40.868 03:03:26 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:40.868 03:03:26 ftl.ftl_trim -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:16:40.868 03:03:26 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=90067 00:16:40.868 03:03:26 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 90067 00:16:40.868 03:03:26 ftl.ftl_trim -- common/autotest_common.sh@827 -- # '[' -z 90067 ']' 00:16:40.868 03:03:26 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:16:40.868 03:03:26 ftl.ftl_trim -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:40.868 03:03:26 ftl.ftl_trim -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:40.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:40.868 03:03:26 ftl.ftl_trim -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:40.868 03:03:26 ftl.ftl_trim -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:40.868 03:03:26 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:16:40.868 [2024-05-14 03:03:26.857554] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:16:40.868 [2024-05-14 03:03:26.857727] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90067 ] 00:16:41.126 [2024-05-14 03:03:26.998171] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
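The trim test above starts spdk_tgt with core mask 0x7 (hence the three reactors reported below) and then blocks until the target's RPC socket accepts requests before issuing any bdev RPCs. A minimal sketch of that wait step, assuming the default /var/tmp/spdk.sock socket and using the generic rpc_get_methods call as a liveness probe; the retry loop is illustrative only and is not the autotest waitforlisten helper itself:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 &
  svcpid=$!
  # Poll the RPC socket until the target answers; rpc_get_methods is a cheap query
  # that succeeds as soon as the RPC server is listening.
  for _ in $(seq 1 100); do
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
  done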
00:16:41.126 [2024-05-14 03:03:27.014088] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:41.126 [2024-05-14 03:03:27.051087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:41.126 [2024-05-14 03:03:27.051124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:41.126 [2024-05-14 03:03:27.051172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:42.059 03:03:27 ftl.ftl_trim -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:42.059 03:03:27 ftl.ftl_trim -- common/autotest_common.sh@860 -- # return 0 00:16:42.059 03:03:27 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:42.059 03:03:27 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:16:42.059 03:03:27 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:42.059 03:03:27 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:16:42.059 03:03:27 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:16:42.059 03:03:27 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:42.318 03:03:28 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:42.318 03:03:28 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:16:42.318 03:03:28 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:42.318 03:03:28 ftl.ftl_trim -- common/autotest_common.sh@1374 -- # local bdev_name=nvme0n1 00:16:42.318 03:03:28 ftl.ftl_trim -- common/autotest_common.sh@1375 -- # local bdev_info 00:16:42.318 03:03:28 ftl.ftl_trim -- common/autotest_common.sh@1376 -- # local bs 00:16:42.318 03:03:28 ftl.ftl_trim -- common/autotest_common.sh@1377 -- # local nb 00:16:42.318 03:03:28 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:42.575 03:03:28 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:16:42.575 { 00:16:42.575 "name": "nvme0n1", 00:16:42.575 "aliases": [ 00:16:42.575 "e5a9d84c-92b2-4833-a77a-8a86c339e0a7" 00:16:42.575 ], 00:16:42.575 "product_name": "NVMe disk", 00:16:42.575 "block_size": 4096, 00:16:42.575 "num_blocks": 1310720, 00:16:42.575 "uuid": "e5a9d84c-92b2-4833-a77a-8a86c339e0a7", 00:16:42.575 "assigned_rate_limits": { 00:16:42.575 "rw_ios_per_sec": 0, 00:16:42.575 "rw_mbytes_per_sec": 0, 00:16:42.575 "r_mbytes_per_sec": 0, 00:16:42.575 "w_mbytes_per_sec": 0 00:16:42.575 }, 00:16:42.575 "claimed": true, 00:16:42.575 "claim_type": "read_many_write_one", 00:16:42.575 "zoned": false, 00:16:42.575 "supported_io_types": { 00:16:42.575 "read": true, 00:16:42.575 "write": true, 00:16:42.575 "unmap": true, 00:16:42.575 "write_zeroes": true, 00:16:42.575 "flush": true, 00:16:42.575 "reset": true, 00:16:42.575 "compare": true, 00:16:42.575 "compare_and_write": false, 00:16:42.575 "abort": true, 00:16:42.575 "nvme_admin": true, 00:16:42.575 "nvme_io": true 00:16:42.575 }, 00:16:42.575 "driver_specific": { 00:16:42.575 "nvme": [ 00:16:42.575 { 00:16:42.575 "pci_address": "0000:00:11.0", 00:16:42.575 "trid": { 00:16:42.575 "trtype": "PCIe", 00:16:42.575 "traddr": "0000:00:11.0" 00:16:42.575 }, 00:16:42.575 "ctrlr_data": { 00:16:42.575 "cntlid": 0, 00:16:42.575 "vendor_id": "0x1b36", 00:16:42.575 "model_number": "QEMU NVMe Ctrl", 00:16:42.575 "serial_number": "12341", 00:16:42.575 "firmware_revision": "8.0.0", 00:16:42.575 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:42.575 "oacs": { 00:16:42.575 "security": 0, 
00:16:42.575 "format": 1, 00:16:42.575 "firmware": 0, 00:16:42.575 "ns_manage": 1 00:16:42.575 }, 00:16:42.575 "multi_ctrlr": false, 00:16:42.575 "ana_reporting": false 00:16:42.575 }, 00:16:42.575 "vs": { 00:16:42.575 "nvme_version": "1.4" 00:16:42.575 }, 00:16:42.575 "ns_data": { 00:16:42.575 "id": 1, 00:16:42.575 "can_share": false 00:16:42.575 } 00:16:42.575 } 00:16:42.575 ], 00:16:42.575 "mp_policy": "active_passive" 00:16:42.575 } 00:16:42.575 } 00:16:42.575 ]' 00:16:42.575 03:03:28 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:16:42.575 03:03:28 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # bs=4096 00:16:42.575 03:03:28 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:16:42.575 03:03:28 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # nb=1310720 00:16:42.575 03:03:28 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bdev_size=5120 00:16:42.575 03:03:28 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # echo 5120 00:16:42.575 03:03:28 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:16:42.575 03:03:28 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:42.575 03:03:28 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:16:42.575 03:03:28 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:42.575 03:03:28 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:42.833 03:03:28 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=b2851337-5c0b-4fbb-bc7e-eff1521bae32 00:16:42.833 03:03:28 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:16:42.833 03:03:28 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b2851337-5c0b-4fbb-bc7e-eff1521bae32 00:16:43.091 03:03:29 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:43.348 03:03:29 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=57d2a042-85c4-47c9-95e1-7d47d9ed9144 00:16:43.348 03:03:29 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 57d2a042-85c4-47c9-95e1-7d47d9ed9144 00:16:43.607 03:03:29 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=9add98b5-127e-429a-ac86-fa3be21a6007 00:16:43.607 03:03:29 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 9add98b5-127e-429a-ac86-fa3be21a6007 00:16:43.607 03:03:29 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:16:43.607 03:03:29 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:43.607 03:03:29 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=9add98b5-127e-429a-ac86-fa3be21a6007 00:16:43.607 03:03:29 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:16:43.607 03:03:29 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 9add98b5-127e-429a-ac86-fa3be21a6007 00:16:43.607 03:03:29 ftl.ftl_trim -- common/autotest_common.sh@1374 -- # local bdev_name=9add98b5-127e-429a-ac86-fa3be21a6007 00:16:43.607 03:03:29 ftl.ftl_trim -- common/autotest_common.sh@1375 -- # local bdev_info 00:16:43.607 03:03:29 ftl.ftl_trim -- common/autotest_common.sh@1376 -- # local bs 00:16:43.607 03:03:29 ftl.ftl_trim -- common/autotest_common.sh@1377 -- # local nb 00:16:43.607 03:03:29 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9add98b5-127e-429a-ac86-fa3be21a6007 00:16:43.865 03:03:29 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # 
bdev_info='[ 00:16:43.865 { 00:16:43.865 "name": "9add98b5-127e-429a-ac86-fa3be21a6007", 00:16:43.865 "aliases": [ 00:16:43.865 "lvs/nvme0n1p0" 00:16:43.865 ], 00:16:43.865 "product_name": "Logical Volume", 00:16:43.865 "block_size": 4096, 00:16:43.865 "num_blocks": 26476544, 00:16:43.865 "uuid": "9add98b5-127e-429a-ac86-fa3be21a6007", 00:16:43.865 "assigned_rate_limits": { 00:16:43.865 "rw_ios_per_sec": 0, 00:16:43.865 "rw_mbytes_per_sec": 0, 00:16:43.865 "r_mbytes_per_sec": 0, 00:16:43.865 "w_mbytes_per_sec": 0 00:16:43.865 }, 00:16:43.865 "claimed": false, 00:16:43.865 "zoned": false, 00:16:43.865 "supported_io_types": { 00:16:43.865 "read": true, 00:16:43.865 "write": true, 00:16:43.865 "unmap": true, 00:16:43.865 "write_zeroes": true, 00:16:43.865 "flush": false, 00:16:43.865 "reset": true, 00:16:43.865 "compare": false, 00:16:43.865 "compare_and_write": false, 00:16:43.865 "abort": false, 00:16:43.865 "nvme_admin": false, 00:16:43.865 "nvme_io": false 00:16:43.865 }, 00:16:43.865 "driver_specific": { 00:16:43.865 "lvol": { 00:16:43.865 "lvol_store_uuid": "57d2a042-85c4-47c9-95e1-7d47d9ed9144", 00:16:43.865 "base_bdev": "nvme0n1", 00:16:43.865 "thin_provision": true, 00:16:43.865 "num_allocated_clusters": 0, 00:16:43.865 "snapshot": false, 00:16:43.865 "clone": false, 00:16:43.865 "esnap_clone": false 00:16:43.865 } 00:16:43.865 } 00:16:43.865 } 00:16:43.865 ]' 00:16:43.865 03:03:29 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:16:43.865 03:03:29 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # bs=4096 00:16:43.865 03:03:29 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:16:43.865 03:03:29 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # nb=26476544 00:16:43.865 03:03:29 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:16:43.865 03:03:29 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # echo 103424 00:16:43.865 03:03:29 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:16:43.865 03:03:29 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:16:43.865 03:03:29 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:44.122 03:03:30 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:44.122 03:03:30 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:44.122 03:03:30 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 9add98b5-127e-429a-ac86-fa3be21a6007 00:16:44.122 03:03:30 ftl.ftl_trim -- common/autotest_common.sh@1374 -- # local bdev_name=9add98b5-127e-429a-ac86-fa3be21a6007 00:16:44.122 03:03:30 ftl.ftl_trim -- common/autotest_common.sh@1375 -- # local bdev_info 00:16:44.122 03:03:30 ftl.ftl_trim -- common/autotest_common.sh@1376 -- # local bs 00:16:44.122 03:03:30 ftl.ftl_trim -- common/autotest_common.sh@1377 -- # local nb 00:16:44.123 03:03:30 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9add98b5-127e-429a-ac86-fa3be21a6007 00:16:44.379 03:03:30 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:16:44.379 { 00:16:44.379 "name": "9add98b5-127e-429a-ac86-fa3be21a6007", 00:16:44.379 "aliases": [ 00:16:44.379 "lvs/nvme0n1p0" 00:16:44.379 ], 00:16:44.379 "product_name": "Logical Volume", 00:16:44.379 "block_size": 4096, 00:16:44.379 "num_blocks": 26476544, 00:16:44.379 "uuid": "9add98b5-127e-429a-ac86-fa3be21a6007", 00:16:44.379 "assigned_rate_limits": { 00:16:44.380 "rw_ios_per_sec": 0, 
00:16:44.380 "rw_mbytes_per_sec": 0, 00:16:44.380 "r_mbytes_per_sec": 0, 00:16:44.380 "w_mbytes_per_sec": 0 00:16:44.380 }, 00:16:44.380 "claimed": false, 00:16:44.380 "zoned": false, 00:16:44.380 "supported_io_types": { 00:16:44.380 "read": true, 00:16:44.380 "write": true, 00:16:44.380 "unmap": true, 00:16:44.380 "write_zeroes": true, 00:16:44.380 "flush": false, 00:16:44.380 "reset": true, 00:16:44.380 "compare": false, 00:16:44.380 "compare_and_write": false, 00:16:44.380 "abort": false, 00:16:44.380 "nvme_admin": false, 00:16:44.380 "nvme_io": false 00:16:44.380 }, 00:16:44.380 "driver_specific": { 00:16:44.380 "lvol": { 00:16:44.380 "lvol_store_uuid": "57d2a042-85c4-47c9-95e1-7d47d9ed9144", 00:16:44.380 "base_bdev": "nvme0n1", 00:16:44.380 "thin_provision": true, 00:16:44.380 "num_allocated_clusters": 0, 00:16:44.380 "snapshot": false, 00:16:44.380 "clone": false, 00:16:44.380 "esnap_clone": false 00:16:44.380 } 00:16:44.380 } 00:16:44.380 } 00:16:44.380 ]' 00:16:44.380 03:03:30 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:16:44.637 03:03:30 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # bs=4096 00:16:44.637 03:03:30 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:16:44.637 03:03:30 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # nb=26476544 00:16:44.637 03:03:30 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:16:44.637 03:03:30 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # echo 103424 00:16:44.637 03:03:30 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:16:44.637 03:03:30 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:44.896 03:03:30 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:16:44.896 03:03:30 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:16:44.896 03:03:30 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 9add98b5-127e-429a-ac86-fa3be21a6007 00:16:44.896 03:03:30 ftl.ftl_trim -- common/autotest_common.sh@1374 -- # local bdev_name=9add98b5-127e-429a-ac86-fa3be21a6007 00:16:44.896 03:03:30 ftl.ftl_trim -- common/autotest_common.sh@1375 -- # local bdev_info 00:16:44.896 03:03:30 ftl.ftl_trim -- common/autotest_common.sh@1376 -- # local bs 00:16:44.896 03:03:30 ftl.ftl_trim -- common/autotest_common.sh@1377 -- # local nb 00:16:44.896 03:03:30 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9add98b5-127e-429a-ac86-fa3be21a6007 00:16:44.896 03:03:30 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:16:44.896 { 00:16:44.896 "name": "9add98b5-127e-429a-ac86-fa3be21a6007", 00:16:44.896 "aliases": [ 00:16:44.896 "lvs/nvme0n1p0" 00:16:44.896 ], 00:16:44.896 "product_name": "Logical Volume", 00:16:44.896 "block_size": 4096, 00:16:44.896 "num_blocks": 26476544, 00:16:44.896 "uuid": "9add98b5-127e-429a-ac86-fa3be21a6007", 00:16:44.896 "assigned_rate_limits": { 00:16:44.896 "rw_ios_per_sec": 0, 00:16:44.896 "rw_mbytes_per_sec": 0, 00:16:44.896 "r_mbytes_per_sec": 0, 00:16:44.896 "w_mbytes_per_sec": 0 00:16:44.896 }, 00:16:44.896 "claimed": false, 00:16:44.896 "zoned": false, 00:16:44.896 "supported_io_types": { 00:16:44.896 "read": true, 00:16:44.896 "write": true, 00:16:44.896 "unmap": true, 00:16:44.896 "write_zeroes": true, 00:16:44.896 "flush": false, 00:16:44.896 "reset": true, 00:16:44.896 "compare": false, 00:16:44.896 "compare_and_write": false, 00:16:44.896 "abort": false, 00:16:44.896 "nvme_admin": 
false, 00:16:44.896 "nvme_io": false 00:16:44.896 }, 00:16:44.896 "driver_specific": { 00:16:44.896 "lvol": { 00:16:44.896 "lvol_store_uuid": "57d2a042-85c4-47c9-95e1-7d47d9ed9144", 00:16:44.896 "base_bdev": "nvme0n1", 00:16:44.896 "thin_provision": true, 00:16:44.896 "num_allocated_clusters": 0, 00:16:44.896 "snapshot": false, 00:16:44.896 "clone": false, 00:16:44.896 "esnap_clone": false 00:16:44.896 } 00:16:44.896 } 00:16:44.896 } 00:16:44.896 ]' 00:16:44.896 03:03:30 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:16:45.153 03:03:30 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # bs=4096 00:16:45.153 03:03:30 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:16:45.153 03:03:30 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # nb=26476544 00:16:45.153 03:03:30 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:16:45.153 03:03:30 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # echo 103424 00:16:45.153 03:03:30 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:16:45.153 03:03:30 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 9add98b5-127e-429a-ac86-fa3be21a6007 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:16:45.412 [2024-05-14 03:03:31.221335] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.412 [2024-05-14 03:03:31.221394] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:45.412 [2024-05-14 03:03:31.221415] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:16:45.412 [2024-05-14 03:03:31.221430] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.412 [2024-05-14 03:03:31.224560] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.412 [2024-05-14 03:03:31.224607] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:45.412 [2024-05-14 03:03:31.224625] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.088 ms 00:16:45.412 [2024-05-14 03:03:31.224642] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.412 [2024-05-14 03:03:31.224787] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:45.412 [2024-05-14 03:03:31.225105] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:45.412 [2024-05-14 03:03:31.225155] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.412 [2024-05-14 03:03:31.225177] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:45.412 [2024-05-14 03:03:31.225191] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.381 ms 00:16:45.412 [2024-05-14 03:03:31.225205] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.412 [2024-05-14 03:03:31.225494] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID f1c59e5a-0c75-4386-944d-643450163ef9 00:16:45.412 [2024-05-14 03:03:31.226592] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.412 [2024-05-14 03:03:31.226631] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:45.412 [2024-05-14 03:03:31.226651] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:16:45.412 [2024-05-14 03:03:31.226663] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:16:45.412 [2024-05-14 03:03:31.231779] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.412 [2024-05-14 03:03:31.231826] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:45.412 [2024-05-14 03:03:31.231847] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.004 ms 00:16:45.412 [2024-05-14 03:03:31.231859] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.412 [2024-05-14 03:03:31.232028] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.412 [2024-05-14 03:03:31.232051] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:45.412 [2024-05-14 03:03:31.232086] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:16:45.412 [2024-05-14 03:03:31.232098] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.412 [2024-05-14 03:03:31.232161] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.412 [2024-05-14 03:03:31.232195] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:45.412 [2024-05-14 03:03:31.232212] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:16:45.412 [2024-05-14 03:03:31.232224] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.412 [2024-05-14 03:03:31.232282] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:45.412 [2024-05-14 03:03:31.233886] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.412 [2024-05-14 03:03:31.233945] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:45.412 [2024-05-14 03:03:31.233973] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.618 ms 00:16:45.412 [2024-05-14 03:03:31.233987] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.412 [2024-05-14 03:03:31.234045] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.412 [2024-05-14 03:03:31.234063] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:45.412 [2024-05-14 03:03:31.234076] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:16:45.412 [2024-05-14 03:03:31.234092] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.412 [2024-05-14 03:03:31.234145] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:45.412 [2024-05-14 03:03:31.234295] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:45.412 [2024-05-14 03:03:31.234322] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:45.412 [2024-05-14 03:03:31.234340] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:45.412 [2024-05-14 03:03:31.234356] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:45.412 [2024-05-14 03:03:31.234374] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:45.412 [2024-05-14 03:03:31.234387] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:45.412 [2024-05-14 03:03:31.234417] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 
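The layout figures just printed are internally consistent: with the 4096-byte block size reported for nvme0n1, 23592960 L2P entries at 4 bytes each occupy a 90 MiB l2p region (the 90.00 MiB region in the dump that follows), and those entries address 90 GiB of user-visible space, which is the 23592960-block ftl0 bdev reported once startup finishes. The --l2p_dram_limit 60 passed to bdev_ftl_create only bounds how much of that table stays resident in DRAM, hence the later "59 (of 60) MiB" l2p cache message. A quick illustrative shell check of that arithmetic (not part of the test itself):

  entries=23592960; addr_size=4; block_size=4096
  echo $(( entries * addr_size / 1024 / 1024 ))          # 90  -> MiB of L2P metadata on media
  echo $(( entries * block_size / 1024 / 1024 / 1024 ))  # 90  -> GiB of addressable user data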
00:16:45.412 [2024-05-14 03:03:31.234429] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:45.412 [2024-05-14 03:03:31.234442] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:45.412 [2024-05-14 03:03:31.234454] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.412 [2024-05-14 03:03:31.234468] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:45.412 [2024-05-14 03:03:31.234490] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.325 ms 00:16:45.412 [2024-05-14 03:03:31.234503] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.412 [2024-05-14 03:03:31.234598] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.412 [2024-05-14 03:03:31.234619] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:45.412 [2024-05-14 03:03:31.234632] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:16:45.412 [2024-05-14 03:03:31.234645] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.412 [2024-05-14 03:03:31.234760] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:45.412 [2024-05-14 03:03:31.234806] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:45.412 [2024-05-14 03:03:31.234833] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:45.412 [2024-05-14 03:03:31.234852] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:45.412 [2024-05-14 03:03:31.234880] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:45.412 [2024-05-14 03:03:31.234894] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:45.412 [2024-05-14 03:03:31.234905] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:45.412 [2024-05-14 03:03:31.234917] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:45.412 [2024-05-14 03:03:31.234928] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:45.412 [2024-05-14 03:03:31.234941] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:45.412 [2024-05-14 03:03:31.234951] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:45.412 [2024-05-14 03:03:31.234964] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:45.412 [2024-05-14 03:03:31.234974] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:45.412 [2024-05-14 03:03:31.234989] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:45.412 [2024-05-14 03:03:31.235000] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:16:45.412 [2024-05-14 03:03:31.235012] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:45.412 [2024-05-14 03:03:31.235022] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:45.412 [2024-05-14 03:03:31.235034] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:16:45.412 [2024-05-14 03:03:31.235045] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:45.412 [2024-05-14 03:03:31.235057] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:45.412 [2024-05-14 03:03:31.235068] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:16:45.412 [2024-05-14 03:03:31.235080] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:45.412 [2024-05-14 03:03:31.235091] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:45.412 [2024-05-14 03:03:31.235103] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:45.412 [2024-05-14 03:03:31.235114] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:45.412 [2024-05-14 03:03:31.235126] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:45.412 [2024-05-14 03:03:31.235153] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:16:45.412 [2024-05-14 03:03:31.235170] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:45.412 [2024-05-14 03:03:31.235181] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:45.412 [2024-05-14 03:03:31.235196] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:45.412 [2024-05-14 03:03:31.235206] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:45.412 [2024-05-14 03:03:31.235218] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:45.413 [2024-05-14 03:03:31.235229] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:16:45.413 [2024-05-14 03:03:31.235264] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:45.413 [2024-05-14 03:03:31.235276] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:45.413 [2024-05-14 03:03:31.235289] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:45.413 [2024-05-14 03:03:31.235300] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:45.413 [2024-05-14 03:03:31.235312] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:45.413 [2024-05-14 03:03:31.235328] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:16:45.413 [2024-05-14 03:03:31.235341] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:45.413 [2024-05-14 03:03:31.235351] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:45.413 [2024-05-14 03:03:31.235365] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:45.413 [2024-05-14 03:03:31.235376] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:45.413 [2024-05-14 03:03:31.235389] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:45.413 [2024-05-14 03:03:31.235401] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:45.413 [2024-05-14 03:03:31.235415] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:45.413 [2024-05-14 03:03:31.235426] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:45.413 [2024-05-14 03:03:31.235438] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:45.413 [2024-05-14 03:03:31.235448] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:45.413 [2024-05-14 03:03:31.235460] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:45.413 [2024-05-14 03:03:31.235473] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:45.413 [2024-05-14 03:03:31.235489] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:45.413 [2024-05-14 
03:03:31.235502] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:45.413 [2024-05-14 03:03:31.235517] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:16:45.413 [2024-05-14 03:03:31.235529] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:16:45.413 [2024-05-14 03:03:31.235542] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:16:45.413 [2024-05-14 03:03:31.235554] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:16:45.413 [2024-05-14 03:03:31.235567] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:16:45.413 [2024-05-14 03:03:31.235583] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:16:45.413 [2024-05-14 03:03:31.235596] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:16:45.413 [2024-05-14 03:03:31.235608] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:16:45.413 [2024-05-14 03:03:31.235623] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:16:45.413 [2024-05-14 03:03:31.235634] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:16:45.413 [2024-05-14 03:03:31.235648] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:16:45.413 [2024-05-14 03:03:31.235659] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:16:45.413 [2024-05-14 03:03:31.235672] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:45.413 [2024-05-14 03:03:31.235685] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:45.413 [2024-05-14 03:03:31.235700] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:45.413 [2024-05-14 03:03:31.235712] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:45.413 [2024-05-14 03:03:31.235725] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:45.413 [2024-05-14 03:03:31.235751] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:45.413 [2024-05-14 03:03:31.235769] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.413 [2024-05-14 03:03:31.235782] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:45.413 [2024-05-14 03:03:31.235798] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.051 ms 00:16:45.413 [2024-05-14 03:03:31.235809] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.413 [2024-05-14 03:03:31.242297] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.413 [2024-05-14 03:03:31.242356] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:45.413 [2024-05-14 03:03:31.242379] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.356 ms 00:16:45.413 [2024-05-14 03:03:31.242391] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.413 [2024-05-14 03:03:31.242564] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.413 [2024-05-14 03:03:31.242584] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:45.413 [2024-05-14 03:03:31.242600] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:16:45.413 [2024-05-14 03:03:31.242613] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.413 [2024-05-14 03:03:31.252814] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.413 [2024-05-14 03:03:31.252883] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:45.413 [2024-05-14 03:03:31.252920] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.154 ms 00:16:45.413 [2024-05-14 03:03:31.252931] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.413 [2024-05-14 03:03:31.253043] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.413 [2024-05-14 03:03:31.253128] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:45.413 [2024-05-14 03:03:31.253161] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:45.413 [2024-05-14 03:03:31.253176] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.413 [2024-05-14 03:03:31.253535] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.413 [2024-05-14 03:03:31.253570] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:45.413 [2024-05-14 03:03:31.253590] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.307 ms 00:16:45.413 [2024-05-14 03:03:31.253602] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.413 [2024-05-14 03:03:31.253748] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.413 [2024-05-14 03:03:31.253776] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:45.413 [2024-05-14 03:03:31.253793] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:16:45.413 [2024-05-14 03:03:31.253807] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.413 [2024-05-14 03:03:31.271139] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.413 [2024-05-14 03:03:31.271227] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:45.413 [2024-05-14 03:03:31.271277] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.249 ms 00:16:45.413 [2024-05-14 03:03:31.271291] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.413 [2024-05-14 03:03:31.283213] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:45.413 [2024-05-14 03:03:31.298248] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.413 [2024-05-14 03:03:31.298364] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:45.413 [2024-05-14 03:03:31.298385] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.802 ms 00:16:45.413 [2024-05-14 03:03:31.298399] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.413 [2024-05-14 03:03:31.361705] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.413 [2024-05-14 03:03:31.361821] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:45.413 [2024-05-14 03:03:31.361842] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.175 ms 00:16:45.413 [2024-05-14 03:03:31.361856] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.413 [2024-05-14 03:03:31.361925] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:16:45.413 [2024-05-14 03:03:31.361950] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:16:47.943 [2024-05-14 03:03:33.433623] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.943 [2024-05-14 03:03:33.433736] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:47.943 [2024-05-14 03:03:33.433757] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2071.706 ms 00:16:47.943 [2024-05-14 03:03:33.433771] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.943 [2024-05-14 03:03:33.434063] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.943 [2024-05-14 03:03:33.434098] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:47.943 [2024-05-14 03:03:33.434112] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:16:47.943 [2024-05-14 03:03:33.434129] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.943 [2024-05-14 03:03:33.438040] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.943 [2024-05-14 03:03:33.438117] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:47.943 [2024-05-14 03:03:33.438134] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.858 ms 00:16:47.943 [2024-05-14 03:03:33.438160] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.943 [2024-05-14 03:03:33.441707] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.943 [2024-05-14 03:03:33.441780] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:47.943 [2024-05-14 03:03:33.441797] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.487 ms 00:16:47.943 [2024-05-14 03:03:33.441809] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.943 [2024-05-14 03:03:33.442095] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.943 [2024-05-14 03:03:33.442142] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:47.943 [2024-05-14 03:03:33.442160] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.229 ms 00:16:47.943 [2024-05-14 03:03:33.442174] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.943 [2024-05-14 03:03:33.465382] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:16:47.943 [2024-05-14 03:03:33.465471] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:47.943 [2024-05-14 03:03:33.465490] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.163 ms 00:16:47.943 [2024-05-14 03:03:33.465505] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.943 [2024-05-14 03:03:33.470107] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.943 [2024-05-14 03:03:33.470210] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:47.943 [2024-05-14 03:03:33.470228] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.544 ms 00:16:47.943 [2024-05-14 03:03:33.470245] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.943 [2024-05-14 03:03:33.474507] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.943 [2024-05-14 03:03:33.474568] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:47.943 [2024-05-14 03:03:33.474600] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.186 ms 00:16:47.943 [2024-05-14 03:03:33.474630] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.943 [2024-05-14 03:03:33.479071] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.943 [2024-05-14 03:03:33.479157] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:47.943 [2024-05-14 03:03:33.479176] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.318 ms 00:16:47.943 [2024-05-14 03:03:33.479189] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.943 [2024-05-14 03:03:33.479289] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.943 [2024-05-14 03:03:33.479357] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:47.943 [2024-05-14 03:03:33.479389] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:16:47.943 [2024-05-14 03:03:33.479404] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.943 [2024-05-14 03:03:33.479514] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.943 [2024-05-14 03:03:33.479546] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:47.943 [2024-05-14 03:03:33.479561] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:16:47.943 [2024-05-14 03:03:33.479595] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.943 [2024-05-14 03:03:33.480865] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:47.943 [2024-05-14 03:03:33.482217] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2259.160 ms, result 0 00:16:47.943 [2024-05-14 03:03:33.483055] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:47.943 { 00:16:47.943 "name": "ftl0", 00:16:47.943 "uuid": "f1c59e5a-0c75-4386-944d-643450163ef9" 00:16:47.943 } 00:16:47.943 03:03:33 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:16:47.943 03:03:33 ftl.ftl_trim -- common/autotest_common.sh@895 -- # local bdev_name=ftl0 00:16:47.943 03:03:33 ftl.ftl_trim -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:47.943 03:03:33 ftl.ftl_trim -- common/autotest_common.sh@897 -- # local i 00:16:47.943 03:03:33 
ftl.ftl_trim -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:47.943 03:03:33 ftl.ftl_trim -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:47.943 03:03:33 ftl.ftl_trim -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:47.943 03:03:33 ftl.ftl_trim -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:16:48.201 [ 00:16:48.201 { 00:16:48.201 "name": "ftl0", 00:16:48.201 "aliases": [ 00:16:48.201 "f1c59e5a-0c75-4386-944d-643450163ef9" 00:16:48.201 ], 00:16:48.201 "product_name": "FTL disk", 00:16:48.201 "block_size": 4096, 00:16:48.201 "num_blocks": 23592960, 00:16:48.201 "uuid": "f1c59e5a-0c75-4386-944d-643450163ef9", 00:16:48.201 "assigned_rate_limits": { 00:16:48.201 "rw_ios_per_sec": 0, 00:16:48.201 "rw_mbytes_per_sec": 0, 00:16:48.201 "r_mbytes_per_sec": 0, 00:16:48.201 "w_mbytes_per_sec": 0 00:16:48.201 }, 00:16:48.201 "claimed": false, 00:16:48.201 "zoned": false, 00:16:48.201 "supported_io_types": { 00:16:48.201 "read": true, 00:16:48.201 "write": true, 00:16:48.201 "unmap": true, 00:16:48.201 "write_zeroes": true, 00:16:48.201 "flush": true, 00:16:48.201 "reset": false, 00:16:48.201 "compare": false, 00:16:48.201 "compare_and_write": false, 00:16:48.201 "abort": false, 00:16:48.201 "nvme_admin": false, 00:16:48.201 "nvme_io": false 00:16:48.201 }, 00:16:48.201 "driver_specific": { 00:16:48.201 "ftl": { 00:16:48.201 "base_bdev": "9add98b5-127e-429a-ac86-fa3be21a6007", 00:16:48.201 "cache": "nvc0n1p0" 00:16:48.201 } 00:16:48.201 } 00:16:48.201 } 00:16:48.201 ] 00:16:48.201 03:03:34 ftl.ftl_trim -- common/autotest_common.sh@903 -- # return 0 00:16:48.201 03:03:34 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:16:48.201 03:03:34 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:16:48.460 03:03:34 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:16:48.460 03:03:34 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:16:48.718 03:03:34 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:16:48.718 { 00:16:48.718 "name": "ftl0", 00:16:48.718 "aliases": [ 00:16:48.718 "f1c59e5a-0c75-4386-944d-643450163ef9" 00:16:48.718 ], 00:16:48.718 "product_name": "FTL disk", 00:16:48.718 "block_size": 4096, 00:16:48.718 "num_blocks": 23592960, 00:16:48.718 "uuid": "f1c59e5a-0c75-4386-944d-643450163ef9", 00:16:48.718 "assigned_rate_limits": { 00:16:48.718 "rw_ios_per_sec": 0, 00:16:48.718 "rw_mbytes_per_sec": 0, 00:16:48.718 "r_mbytes_per_sec": 0, 00:16:48.718 "w_mbytes_per_sec": 0 00:16:48.718 }, 00:16:48.718 "claimed": false, 00:16:48.718 "zoned": false, 00:16:48.718 "supported_io_types": { 00:16:48.718 "read": true, 00:16:48.718 "write": true, 00:16:48.718 "unmap": true, 00:16:48.718 "write_zeroes": true, 00:16:48.718 "flush": true, 00:16:48.718 "reset": false, 00:16:48.718 "compare": false, 00:16:48.718 "compare_and_write": false, 00:16:48.718 "abort": false, 00:16:48.718 "nvme_admin": false, 00:16:48.718 "nvme_io": false 00:16:48.718 }, 00:16:48.718 "driver_specific": { 00:16:48.718 "ftl": { 00:16:48.718 "base_bdev": "9add98b5-127e-429a-ac86-fa3be21a6007", 00:16:48.718 "cache": "nvc0n1p0" 00:16:48.718 } 00:16:48.718 } 00:16:48.718 } 00:16:48.718 ]' 00:16:48.718 03:03:34 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:16:48.718 03:03:34 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:16:48.718 03:03:34 ftl.ftl_trim -- 
ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:16:48.977 [2024-05-14 03:03:34.864250] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.977 [2024-05-14 03:03:34.864318] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:48.977 [2024-05-14 03:03:34.864359] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:48.977 [2024-05-14 03:03:34.864371] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.977 [2024-05-14 03:03:34.864419] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:48.977 [2024-05-14 03:03:34.864954] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.977 [2024-05-14 03:03:34.864998] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:48.977 [2024-05-14 03:03:34.865013] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.513 ms 00:16:48.977 [2024-05-14 03:03:34.865026] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.977 [2024-05-14 03:03:34.865690] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.977 [2024-05-14 03:03:34.865735] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:48.977 [2024-05-14 03:03:34.865749] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.622 ms 00:16:48.977 [2024-05-14 03:03:34.865764] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.977 [2024-05-14 03:03:34.869410] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.977 [2024-05-14 03:03:34.869443] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:48.977 [2024-05-14 03:03:34.869458] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.611 ms 00:16:48.977 [2024-05-14 03:03:34.869470] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.977 [2024-05-14 03:03:34.876588] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.977 [2024-05-14 03:03:34.876628] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:48.977 [2024-05-14 03:03:34.876643] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.047 ms 00:16:48.977 [2024-05-14 03:03:34.876672] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.977 [2024-05-14 03:03:34.878134] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.977 [2024-05-14 03:03:34.878206] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:48.977 [2024-05-14 03:03:34.878224] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.329 ms 00:16:48.977 [2024-05-14 03:03:34.878237] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.977 [2024-05-14 03:03:34.882360] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.977 [2024-05-14 03:03:34.882407] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:48.977 [2024-05-14 03:03:34.882437] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.994 ms 00:16:48.977 [2024-05-14 03:03:34.882453] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.977 [2024-05-14 03:03:34.882653] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.977 [2024-05-14 03:03:34.882682] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:48.977 [2024-05-14 03:03:34.882712] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:16:48.977 [2024-05-14 03:03:34.882727] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.977 [2024-05-14 03:03:34.884751] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.977 [2024-05-14 03:03:34.884809] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:48.977 [2024-05-14 03:03:34.884823] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.976 ms 00:16:48.977 [2024-05-14 03:03:34.884834] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.977 [2024-05-14 03:03:34.886296] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.977 [2024-05-14 03:03:34.886365] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:48.977 [2024-05-14 03:03:34.886379] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.353 ms 00:16:48.977 [2024-05-14 03:03:34.886390] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.977 [2024-05-14 03:03:34.887695] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.977 [2024-05-14 03:03:34.887774] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:48.978 [2024-05-14 03:03:34.887792] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.207 ms 00:16:48.978 [2024-05-14 03:03:34.887807] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.978 [2024-05-14 03:03:34.889005] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.978 [2024-05-14 03:03:34.889060] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:48.978 [2024-05-14 03:03:34.889074] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.065 ms 00:16:48.978 [2024-05-14 03:03:34.889086] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.978 [2024-05-14 03:03:34.889219] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:48.978 [2024-05-14 03:03:34.889249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.889265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.889279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.889291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.889304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.889316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.889329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.889340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.889354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.889366] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.889381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.889393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.889406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.889417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.889432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.889443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.889456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.889468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.889481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.889492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.889505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.889516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.889558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.889569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.889581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.889592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.889607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.889617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.889629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.889641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.889654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.889665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.889678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.889688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 
[2024-05-14 03:03:34.889701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.889711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.889724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.889736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.889749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.889759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.889772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.889783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.889797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.889807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.889820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.889850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.889865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.889876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.889889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.889900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.889912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.889923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.889936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.889947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.889959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.889970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.889982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.889993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.890007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:16:48.978 [2024-05-14 03:03:34.890018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.890030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.890042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.890055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.890065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.890078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.890089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.890101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.890112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.890124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.890148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.890162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.890173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.890186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.890197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.890213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.890224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.890237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.890248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.890261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.890271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.890284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.890295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.890307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.890318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.890346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:48.978 [2024-05-14 03:03:34.890357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:48.979 [2024-05-14 03:03:34.890370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:48.979 [2024-05-14 03:03:34.890380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:48.979 [2024-05-14 03:03:34.890393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:48.979 [2024-05-14 03:03:34.890404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:48.979 [2024-05-14 03:03:34.890419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:48.979 [2024-05-14 03:03:34.890430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:48.979 [2024-05-14 03:03:34.890443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:48.979 [2024-05-14 03:03:34.890454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:48.979 [2024-05-14 03:03:34.890467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:48.979 [2024-05-14 03:03:34.890478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:48.979 [2024-05-14 03:03:34.890491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:48.979 [2024-05-14 03:03:34.890502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:48.979 [2024-05-14 03:03:34.890515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:48.979 [2024-05-14 03:03:34.890526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:48.979 [2024-05-14 03:03:34.890548] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:48.979 [2024-05-14 03:03:34.890560] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f1c59e5a-0c75-4386-944d-643450163ef9 00:16:48.979 [2024-05-14 03:03:34.890573] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:48.979 [2024-05-14 03:03:34.890599] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:48.979 [2024-05-14 03:03:34.890632] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:48.979 [2024-05-14 03:03:34.890644] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:48.979 [2024-05-14 03:03:34.890659] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:48.979 [2024-05-14 03:03:34.890684] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:48.979 [2024-05-14 03:03:34.890696] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:48.979 [2024-05-14 03:03:34.890705] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:48.979 [2024-05-14 03:03:34.890716] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:16:48.979 [2024-05-14 03:03:34.890727] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.979 [2024-05-14 03:03:34.890740] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:48.979 [2024-05-14 03:03:34.890752] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.512 ms 00:16:48.979 [2024-05-14 03:03:34.890763] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.979 [2024-05-14 03:03:34.892423] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.979 [2024-05-14 03:03:34.892454] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:48.979 [2024-05-14 03:03:34.892470] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.619 ms 00:16:48.979 [2024-05-14 03:03:34.892483] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.979 [2024-05-14 03:03:34.892569] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.979 [2024-05-14 03:03:34.892588] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:48.979 [2024-05-14 03:03:34.892600] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:16:48.979 [2024-05-14 03:03:34.892615] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.979 [2024-05-14 03:03:34.898071] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:48.979 [2024-05-14 03:03:34.898132] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:48.979 [2024-05-14 03:03:34.898192] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:48.979 [2024-05-14 03:03:34.898209] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.979 [2024-05-14 03:03:34.898373] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:48.979 [2024-05-14 03:03:34.898396] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:48.979 [2024-05-14 03:03:34.898409] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:48.979 [2024-05-14 03:03:34.898422] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.979 [2024-05-14 03:03:34.898529] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:48.979 [2024-05-14 03:03:34.898574] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:48.979 [2024-05-14 03:03:34.898588] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:48.979 [2024-05-14 03:03:34.898600] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.979 [2024-05-14 03:03:34.898638] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:48.979 [2024-05-14 03:03:34.898654] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:48.979 [2024-05-14 03:03:34.898666] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:48.979 [2024-05-14 03:03:34.898681] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.979 [2024-05-14 03:03:34.908117] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:48.979 [2024-05-14 03:03:34.908229] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:48.979 [2024-05-14 03:03:34.908250] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:48.979 
[2024-05-14 03:03:34.908265] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.979 [2024-05-14 03:03:34.912100] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:48.979 [2024-05-14 03:03:34.912172] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:48.979 [2024-05-14 03:03:34.912223] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:48.979 [2024-05-14 03:03:34.912251] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.979 [2024-05-14 03:03:34.912316] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:48.979 [2024-05-14 03:03:34.912334] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:48.979 [2024-05-14 03:03:34.912349] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:48.979 [2024-05-14 03:03:34.912361] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.979 [2024-05-14 03:03:34.912461] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:48.979 [2024-05-14 03:03:34.912488] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:48.979 [2024-05-14 03:03:34.912502] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:48.979 [2024-05-14 03:03:34.912517] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.979 [2024-05-14 03:03:34.912640] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:48.979 [2024-05-14 03:03:34.912668] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:48.979 [2024-05-14 03:03:34.912696] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:48.979 [2024-05-14 03:03:34.912713] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.979 [2024-05-14 03:03:34.912792] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:48.979 [2024-05-14 03:03:34.912818] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:48.979 [2024-05-14 03:03:34.912831] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:48.979 [2024-05-14 03:03:34.912844] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.979 [2024-05-14 03:03:34.912905] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:48.979 [2024-05-14 03:03:34.912924] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:48.979 [2024-05-14 03:03:34.912936] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:48.979 [2024-05-14 03:03:34.912952] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.979 [2024-05-14 03:03:34.913016] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:48.979 [2024-05-14 03:03:34.913041] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:48.979 [2024-05-14 03:03:34.913054] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:48.979 [2024-05-14 03:03:34.913069] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.979 [2024-05-14 03:03:34.913287] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 49.053 ms, result 0 00:16:48.979 true 00:16:48.979 03:03:34 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 90067 00:16:48.979 03:03:34 
ftl.ftl_trim -- common/autotest_common.sh@946 -- # '[' -z 90067 ']' 00:16:48.979 03:03:34 ftl.ftl_trim -- common/autotest_common.sh@950 -- # kill -0 90067 00:16:48.979 03:03:34 ftl.ftl_trim -- common/autotest_common.sh@951 -- # uname 00:16:48.979 03:03:34 ftl.ftl_trim -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:48.979 03:03:34 ftl.ftl_trim -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 90067 00:16:48.979 03:03:34 ftl.ftl_trim -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:48.979 03:03:34 ftl.ftl_trim -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:48.979 03:03:34 ftl.ftl_trim -- common/autotest_common.sh@964 -- # echo 'killing process with pid 90067' 00:16:48.979 killing process with pid 90067 00:16:48.979 03:03:34 ftl.ftl_trim -- common/autotest_common.sh@965 -- # kill 90067 00:16:48.979 03:03:34 ftl.ftl_trim -- common/autotest_common.sh@970 -- # wait 90067 00:16:52.265 03:03:37 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:16:53.201 65536+0 records in 00:16:53.201 65536+0 records out 00:16:53.201 268435456 bytes (268 MB, 256 MiB) copied, 1.07293 s, 250 MB/s 00:16:53.201 03:03:38 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:53.201 [2024-05-14 03:03:39.040375] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:16:53.201 [2024-05-14 03:03:39.040568] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90238 ] 00:16:53.201 [2024-05-14 03:03:39.188262] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:16:53.201 [2024-05-14 03:03:39.207941] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:53.463 [2024-05-14 03:03:39.246034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:53.463 [2024-05-14 03:03:39.331870] bdev.c:8090:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:53.463 [2024-05-14 03:03:39.331971] bdev.c:8090:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:53.463 [2024-05-14 03:03:39.482269] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.463 [2024-05-14 03:03:39.482392] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:53.463 [2024-05-14 03:03:39.482429] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:16:53.463 [2024-05-14 03:03:39.482453] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.463 [2024-05-14 03:03:39.485476] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.463 [2024-05-14 03:03:39.485522] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:53.463 [2024-05-14 03:03:39.485541] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.991 ms 00:16:53.463 [2024-05-14 03:03:39.485553] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.463 [2024-05-14 03:03:39.485895] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:53.463 [2024-05-14 03:03:39.486289] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:53.463 [2024-05-14 03:03:39.486334] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.463 [2024-05-14 03:03:39.486354] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:53.463 [2024-05-14 03:03:39.486378] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.451 ms 00:16:53.463 [2024-05-14 03:03:39.486395] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.728 [2024-05-14 03:03:39.487769] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:53.728 [2024-05-14 03:03:39.490021] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.728 [2024-05-14 03:03:39.490065] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:53.728 [2024-05-14 03:03:39.490082] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.255 ms 00:16:53.728 [2024-05-14 03:03:39.490094] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.728 [2024-05-14 03:03:39.490233] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.728 [2024-05-14 03:03:39.490256] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:53.728 [2024-05-14 03:03:39.490272] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:16:53.728 [2024-05-14 03:03:39.490294] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.728 [2024-05-14 03:03:39.495244] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.728 [2024-05-14 03:03:39.495309] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:53.728 [2024-05-14 03:03:39.495361] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.858 ms 00:16:53.728 [2024-05-14 03:03:39.495372] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.728 [2024-05-14 03:03:39.495507] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.728 [2024-05-14 03:03:39.495529] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:53.728 [2024-05-14 03:03:39.495542] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:16:53.728 [2024-05-14 03:03:39.495557] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.728 [2024-05-14 03:03:39.495601] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.728 [2024-05-14 03:03:39.495618] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:53.728 [2024-05-14 03:03:39.495630] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:53.728 [2024-05-14 03:03:39.495640] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.728 [2024-05-14 03:03:39.495671] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:53.728 [2024-05-14 03:03:39.497158] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.728 [2024-05-14 03:03:39.497233] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:53.728 [2024-05-14 03:03:39.497265] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.495 ms 00:16:53.728 [2024-05-14 03:03:39.497281] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.728 [2024-05-14 03:03:39.497341] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.728 [2024-05-14 03:03:39.497358] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:53.728 [2024-05-14 03:03:39.497370] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:16:53.728 [2024-05-14 03:03:39.497397] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.728 [2024-05-14 03:03:39.497425] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:53.728 [2024-05-14 03:03:39.497455] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:16:53.728 [2024-05-14 03:03:39.497506] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:53.728 [2024-05-14 03:03:39.497540] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:16:53.728 [2024-05-14 03:03:39.497625] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:53.728 [2024-05-14 03:03:39.497658] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:53.728 [2024-05-14 03:03:39.497675] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:53.728 [2024-05-14 03:03:39.497690] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:53.728 [2024-05-14 03:03:39.497704] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:53.728 [2024-05-14 03:03:39.497743] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:53.728 [2024-05-14 03:03:39.497754] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] 
L2P address size: 4 00:16:53.728 [2024-05-14 03:03:39.497773] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:53.728 [2024-05-14 03:03:39.497787] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:53.728 [2024-05-14 03:03:39.497807] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.728 [2024-05-14 03:03:39.497820] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:53.728 [2024-05-14 03:03:39.497831] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.384 ms 00:16:53.728 [2024-05-14 03:03:39.497842] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.728 [2024-05-14 03:03:39.497934] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.728 [2024-05-14 03:03:39.497960] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:53.728 [2024-05-14 03:03:39.497974] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:16:53.728 [2024-05-14 03:03:39.497985] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.728 [2024-05-14 03:03:39.498079] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:53.728 [2024-05-14 03:03:39.498097] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:53.728 [2024-05-14 03:03:39.498110] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:53.728 [2024-05-14 03:03:39.498121] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:53.728 [2024-05-14 03:03:39.498147] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:53.728 [2024-05-14 03:03:39.498160] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:53.728 [2024-05-14 03:03:39.498170] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:53.728 [2024-05-14 03:03:39.498181] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:53.729 [2024-05-14 03:03:39.498191] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:53.729 [2024-05-14 03:03:39.498201] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:53.729 [2024-05-14 03:03:39.498211] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:53.729 [2024-05-14 03:03:39.498225] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:53.729 [2024-05-14 03:03:39.498236] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:53.729 [2024-05-14 03:03:39.498260] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:53.729 [2024-05-14 03:03:39.498271] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:16:53.729 [2024-05-14 03:03:39.498281] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:53.729 [2024-05-14 03:03:39.498291] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:53.729 [2024-05-14 03:03:39.498301] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:16:53.729 [2024-05-14 03:03:39.498313] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:53.729 [2024-05-14 03:03:39.498339] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:53.729 [2024-05-14 03:03:39.498349] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:16:53.729 [2024-05-14 
03:03:39.498360] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:53.729 [2024-05-14 03:03:39.498370] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:53.729 [2024-05-14 03:03:39.498380] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:53.729 [2024-05-14 03:03:39.498390] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:53.729 [2024-05-14 03:03:39.498400] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:53.729 [2024-05-14 03:03:39.498410] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:16:53.729 [2024-05-14 03:03:39.498426] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:53.729 [2024-05-14 03:03:39.498437] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:53.729 [2024-05-14 03:03:39.498448] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:53.729 [2024-05-14 03:03:39.498458] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:53.729 [2024-05-14 03:03:39.498468] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:53.729 [2024-05-14 03:03:39.498478] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:16:53.729 [2024-05-14 03:03:39.498488] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:53.729 [2024-05-14 03:03:39.498498] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:53.729 [2024-05-14 03:03:39.498508] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:53.729 [2024-05-14 03:03:39.498518] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:53.729 [2024-05-14 03:03:39.498529] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:53.729 [2024-05-14 03:03:39.498539] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:16:53.729 [2024-05-14 03:03:39.498549] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:53.729 [2024-05-14 03:03:39.498558] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:53.729 [2024-05-14 03:03:39.498570] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:53.729 [2024-05-14 03:03:39.498581] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:53.729 [2024-05-14 03:03:39.498594] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:53.729 [2024-05-14 03:03:39.498606] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:53.729 [2024-05-14 03:03:39.498617] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:53.729 [2024-05-14 03:03:39.498628] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:53.729 [2024-05-14 03:03:39.498638] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:53.729 [2024-05-14 03:03:39.498649] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:53.729 [2024-05-14 03:03:39.498659] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:53.729 [2024-05-14 03:03:39.498672] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:53.729 [2024-05-14 03:03:39.498700] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 
00:16:53.729 [2024-05-14 03:03:39.498712] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:53.729 [2024-05-14 03:03:39.498723] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:16:53.729 [2024-05-14 03:03:39.498734] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:16:53.729 [2024-05-14 03:03:39.498745] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:16:53.729 [2024-05-14 03:03:39.498756] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:16:53.729 [2024-05-14 03:03:39.498766] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:16:53.729 [2024-05-14 03:03:39.498777] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:16:53.729 [2024-05-14 03:03:39.498791] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:16:53.729 [2024-05-14 03:03:39.498802] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:16:53.729 [2024-05-14 03:03:39.498813] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:16:53.729 [2024-05-14 03:03:39.498824] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:16:53.729 [2024-05-14 03:03:39.498835] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:16:53.729 [2024-05-14 03:03:39.498847] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:16:53.729 [2024-05-14 03:03:39.498858] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:53.729 [2024-05-14 03:03:39.498873] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:53.729 [2024-05-14 03:03:39.498885] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:53.729 [2024-05-14 03:03:39.498896] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:53.729 [2024-05-14 03:03:39.498907] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:53.729 [2024-05-14 03:03:39.498918] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:53.729 [2024-05-14 03:03:39.498930] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.729 [2024-05-14 03:03:39.498942] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:53.729 
[2024-05-14 03:03:39.498953] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.895 ms 00:16:53.729 [2024-05-14 03:03:39.498963] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.729 [2024-05-14 03:03:39.505494] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.729 [2024-05-14 03:03:39.505554] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:53.729 [2024-05-14 03:03:39.505600] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.473 ms 00:16:53.729 [2024-05-14 03:03:39.505612] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.729 [2024-05-14 03:03:39.505770] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.729 [2024-05-14 03:03:39.505812] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:53.729 [2024-05-14 03:03:39.505827] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:16:53.729 [2024-05-14 03:03:39.505842] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.729 [2024-05-14 03:03:39.523386] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.729 [2024-05-14 03:03:39.523469] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:53.729 [2024-05-14 03:03:39.523504] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.509 ms 00:16:53.729 [2024-05-14 03:03:39.523516] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.729 [2024-05-14 03:03:39.523626] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.729 [2024-05-14 03:03:39.523647] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:53.729 [2024-05-14 03:03:39.523659] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:53.729 [2024-05-14 03:03:39.523717] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.729 [2024-05-14 03:03:39.524106] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.729 [2024-05-14 03:03:39.524189] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:53.729 [2024-05-14 03:03:39.524215] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.344 ms 00:16:53.729 [2024-05-14 03:03:39.524226] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.729 [2024-05-14 03:03:39.524399] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.729 [2024-05-14 03:03:39.524433] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:53.729 [2024-05-14 03:03:39.524450] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:16:53.729 [2024-05-14 03:03:39.524461] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.729 [2024-05-14 03:03:39.530391] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.729 [2024-05-14 03:03:39.530446] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:53.729 [2024-05-14 03:03:39.530490] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.899 ms 00:16:53.729 [2024-05-14 03:03:39.530501] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.729 [2024-05-14 03:03:39.533123] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:16:53.729 [2024-05-14 
03:03:39.533206] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:53.729 [2024-05-14 03:03:39.533238] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.729 [2024-05-14 03:03:39.533249] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:53.729 [2024-05-14 03:03:39.533261] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.597 ms 00:16:53.729 [2024-05-14 03:03:39.533271] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.729 [2024-05-14 03:03:39.548559] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.729 [2024-05-14 03:03:39.548638] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:53.729 [2024-05-14 03:03:39.548685] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.218 ms 00:16:53.730 [2024-05-14 03:03:39.548707] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.730 [2024-05-14 03:03:39.550768] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.730 [2024-05-14 03:03:39.550841] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:53.730 [2024-05-14 03:03:39.550856] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.974 ms 00:16:53.730 [2024-05-14 03:03:39.550866] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.730 [2024-05-14 03:03:39.552614] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.730 [2024-05-14 03:03:39.552680] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:53.730 [2024-05-14 03:03:39.552709] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.698 ms 00:16:53.730 [2024-05-14 03:03:39.552719] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.730 [2024-05-14 03:03:39.553000] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.730 [2024-05-14 03:03:39.553035] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:53.730 [2024-05-14 03:03:39.553049] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.190 ms 00:16:53.730 [2024-05-14 03:03:39.553060] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.730 [2024-05-14 03:03:39.571606] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.730 [2024-05-14 03:03:39.571684] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:53.730 [2024-05-14 03:03:39.571730] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.503 ms 00:16:53.730 [2024-05-14 03:03:39.571741] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.730 [2024-05-14 03:03:39.579149] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:53.730 [2024-05-14 03:03:39.591883] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.730 [2024-05-14 03:03:39.591951] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:53.730 [2024-05-14 03:03:39.591991] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.982 ms 00:16:53.730 [2024-05-14 03:03:39.592002] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.730 [2024-05-14 03:03:39.592131] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:16:53.730 [2024-05-14 03:03:39.592149] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:53.730 [2024-05-14 03:03:39.592181] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:53.730 [2024-05-14 03:03:39.592231] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.730 [2024-05-14 03:03:39.592355] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.730 [2024-05-14 03:03:39.592372] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:53.730 [2024-05-14 03:03:39.592390] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:16:53.730 [2024-05-14 03:03:39.592401] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.730 [2024-05-14 03:03:39.594718] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.730 [2024-05-14 03:03:39.594796] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:53.730 [2024-05-14 03:03:39.594826] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.288 ms 00:16:53.730 [2024-05-14 03:03:39.594836] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.730 [2024-05-14 03:03:39.594884] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.730 [2024-05-14 03:03:39.594902] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:53.730 [2024-05-14 03:03:39.594913] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:16:53.730 [2024-05-14 03:03:39.594923] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.730 [2024-05-14 03:03:39.594960] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:53.730 [2024-05-14 03:03:39.594975] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.730 [2024-05-14 03:03:39.594985] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:53.730 [2024-05-14 03:03:39.595006] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:16:53.730 [2024-05-14 03:03:39.595031] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.730 [2024-05-14 03:03:39.598700] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.730 [2024-05-14 03:03:39.598760] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:53.730 [2024-05-14 03:03:39.598791] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.624 ms 00:16:53.730 [2024-05-14 03:03:39.598801] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.730 [2024-05-14 03:03:39.598885] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.730 [2024-05-14 03:03:39.598902] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:53.730 [2024-05-14 03:03:39.598914] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:16:53.730 [2024-05-14 03:03:39.598923] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.730 [2024-05-14 03:03:39.600006] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:53.730 [2024-05-14 03:03:39.601355] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 117.317 ms, result 0 00:16:53.730 [2024-05-14 03:03:39.602295] 
mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:53.730 [2024-05-14 03:03:39.610230] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:04.903  Copying: 22/256 [MB] (22 MBps) Copying: 45/256 [MB] (22 MBps) Copying: 67/256 [MB] (22 MBps) Copying: 90/256 [MB] (22 MBps) Copying: 113/256 [MB] (23 MBps) Copying: 135/256 [MB] (22 MBps) Copying: 158/256 [MB] (22 MBps) Copying: 182/256 [MB] (23 MBps) Copying: 205/256 [MB] (23 MBps) Copying: 228/256 [MB] (22 MBps) Copying: 250/256 [MB] (22 MBps) Copying: 256/256 [MB] (average 22 MBps)[2024-05-14 03:03:50.836459] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:04.903 [2024-05-14 03:03:50.837475] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.903 [2024-05-14 03:03:50.837512] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:04.903 [2024-05-14 03:03:50.837530] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:04.903 [2024-05-14 03:03:50.837541] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.903 [2024-05-14 03:03:50.837569] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:04.903 [2024-05-14 03:03:50.837999] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.903 [2024-05-14 03:03:50.838025] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:04.903 [2024-05-14 03:03:50.838051] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.410 ms 00:17:04.903 [2024-05-14 03:03:50.838066] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.903 [2024-05-14 03:03:50.839908] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.903 [2024-05-14 03:03:50.839966] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:04.903 [2024-05-14 03:03:50.839982] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.814 ms 00:17:04.903 [2024-05-14 03:03:50.839993] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.903 [2024-05-14 03:03:50.846496] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.903 [2024-05-14 03:03:50.846533] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:04.903 [2024-05-14 03:03:50.846564] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.480 ms 00:17:04.904 [2024-05-14 03:03:50.846574] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.904 [2024-05-14 03:03:50.853184] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.904 [2024-05-14 03:03:50.853231] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:04.904 [2024-05-14 03:03:50.853260] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.545 ms 00:17:04.904 [2024-05-14 03:03:50.853282] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.904 [2024-05-14 03:03:50.854722] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.904 [2024-05-14 03:03:50.854804] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:04.904 [2024-05-14 03:03:50.854835] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.395 ms 00:17:04.904 
[2024-05-14 03:03:50.854845] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.904 [2024-05-14 03:03:50.857877] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.904 [2024-05-14 03:03:50.857959] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:04.904 [2024-05-14 03:03:50.857988] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.996 ms 00:17:04.904 [2024-05-14 03:03:50.858005] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.904 [2024-05-14 03:03:50.858119] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.904 [2024-05-14 03:03:50.858148] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:04.904 [2024-05-14 03:03:50.858174] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:17:04.904 [2024-05-14 03:03:50.858184] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.904 [2024-05-14 03:03:50.860096] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.904 [2024-05-14 03:03:50.860170] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:04.904 [2024-05-14 03:03:50.860200] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.842 ms 00:17:04.904 [2024-05-14 03:03:50.860224] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.904 [2024-05-14 03:03:50.861808] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.904 [2024-05-14 03:03:50.861871] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:04.904 [2024-05-14 03:03:50.861884] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.549 ms 00:17:04.904 [2024-05-14 03:03:50.861893] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.904 [2024-05-14 03:03:50.863216] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.904 [2024-05-14 03:03:50.863293] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:04.904 [2024-05-14 03:03:50.863322] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.289 ms 00:17:04.904 [2024-05-14 03:03:50.863332] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.904 [2024-05-14 03:03:50.864480] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.904 [2024-05-14 03:03:50.864530] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:04.904 [2024-05-14 03:03:50.864573] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.085 ms 00:17:04.904 [2024-05-14 03:03:50.864583] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.904 [2024-05-14 03:03:50.864619] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:04.904 [2024-05-14 03:03:50.864640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:04.904 [2024-05-14 03:03:50.864653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:04.904 [2024-05-14 03:03:50.864664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:04.904 [2024-05-14 03:03:50.864675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:04.904 [2024-05-14 03:03:50.864685] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:04.904 [2024-05-14 03:03:50.864695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:04.904 [2024-05-14 03:03:50.864705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:04.904 [2024-05-14 03:03:50.864715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:04.904 [2024-05-14 03:03:50.864726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:04.904 [2024-05-14 03:03:50.864736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:04.904 [2024-05-14 03:03:50.864746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:04.904 [2024-05-14 03:03:50.864756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:04.904 [2024-05-14 03:03:50.864766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:04.904 [2024-05-14 03:03:50.864792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:04.904 [2024-05-14 03:03:50.864803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:04.904 [2024-05-14 03:03:50.864829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:04.904 [2024-05-14 03:03:50.864840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:04.904 [2024-05-14 03:03:50.864850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:04.904 [2024-05-14 03:03:50.864861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:04.904 [2024-05-14 03:03:50.864872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:04.904 [2024-05-14 03:03:50.864883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:04.904 [2024-05-14 03:03:50.864894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:04.904 [2024-05-14 03:03:50.864905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:04.904 [2024-05-14 03:03:50.864916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:04.904 [2024-05-14 03:03:50.864926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:04.904 [2024-05-14 03:03:50.864937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:04.904 [2024-05-14 03:03:50.864947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:04.904 [2024-05-14 03:03:50.864958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:04.904 [2024-05-14 03:03:50.864968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:04.904 [2024-05-14 
03:03:50.864979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:04.904 [2024-05-14 03:03:50.864990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:04.904 [2024-05-14 03:03:50.865001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:04.904 [2024-05-14 03:03:50.865013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:04.904 [2024-05-14 03:03:50.865024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:04.904 [2024-05-14 03:03:50.865035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:04.904 [2024-05-14 03:03:50.865045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:04.904 [2024-05-14 03:03:50.865057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:04.904 [2024-05-14 03:03:50.865067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:04.904 [2024-05-14 03:03:50.865078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:04.904 [2024-05-14 03:03:50.865089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:04.904 [2024-05-14 03:03:50.865103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:04.904 [2024-05-14 03:03:50.865114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:04.904 [2024-05-14 03:03:50.865125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:04.904 [2024-05-14 03:03:50.865136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:04.904 [2024-05-14 03:03:50.865146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:04.904 [2024-05-14 03:03:50.865156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:04.904 [2024-05-14 03:03:50.865180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:04.904 [2024-05-14 03:03:50.865192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:04.904 [2024-05-14 03:03:50.865203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:04.904 [2024-05-14 03:03:50.865214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:04.904 [2024-05-14 03:03:50.865224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:04.904 [2024-05-14 03:03:50.865235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:04.904 [2024-05-14 03:03:50.865246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:04.905 [2024-05-14 03:03:50.865257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 
00:17:04.905 [2024-05-14 03:03:50.865267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:04.905 [2024-05-14 03:03:50.865278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:04.905 [2024-05-14 03:03:50.865289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:04.905 [2024-05-14 03:03:50.865299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:04.905 [2024-05-14 03:03:50.865310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:04.905 [2024-05-14 03:03:50.865321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:04.905 [2024-05-14 03:03:50.865332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:04.905 [2024-05-14 03:03:50.865342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:04.905 [2024-05-14 03:03:50.865353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:04.905 [2024-05-14 03:03:50.865364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:04.905 [2024-05-14 03:03:50.865376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:04.905 [2024-05-14 03:03:50.865386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:04.905 [2024-05-14 03:03:50.865397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:04.905 [2024-05-14 03:03:50.865408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:04.905 [2024-05-14 03:03:50.865419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:04.905 [2024-05-14 03:03:50.865429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:04.905 [2024-05-14 03:03:50.865440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:04.905 [2024-05-14 03:03:50.865450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:04.905 [2024-05-14 03:03:50.865461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:04.905 [2024-05-14 03:03:50.865471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:04.905 [2024-05-14 03:03:50.865482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:04.905 [2024-05-14 03:03:50.865493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:04.905 [2024-05-14 03:03:50.865503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:04.905 [2024-05-14 03:03:50.865514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:04.905 [2024-05-14 03:03:50.865525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 
wr_cnt: 0 state: free 00:17:04.905 [2024-05-14 03:03:50.865536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:04.905 [2024-05-14 03:03:50.865547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:04.905 [2024-05-14 03:03:50.865558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:04.905 [2024-05-14 03:03:50.865569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:04.905 [2024-05-14 03:03:50.865580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:04.905 [2024-05-14 03:03:50.865590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:04.905 [2024-05-14 03:03:50.865601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:04.905 [2024-05-14 03:03:50.865611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:04.905 [2024-05-14 03:03:50.865622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:04.905 [2024-05-14 03:03:50.865633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:04.905 [2024-05-14 03:03:50.865643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:04.905 [2024-05-14 03:03:50.865654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:04.905 [2024-05-14 03:03:50.865664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:04.905 [2024-05-14 03:03:50.865675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:04.905 [2024-05-14 03:03:50.865686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:04.905 [2024-05-14 03:03:50.865696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:04.905 [2024-05-14 03:03:50.865707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:04.905 [2024-05-14 03:03:50.865724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:04.905 [2024-05-14 03:03:50.865735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:04.905 [2024-05-14 03:03:50.865746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:04.905 [2024-05-14 03:03:50.865758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:04.905 [2024-05-14 03:03:50.865777] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:04.905 [2024-05-14 03:03:50.865788] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f1c59e5a-0c75-4386-944d-643450163ef9 00:17:04.905 [2024-05-14 03:03:50.865799] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:04.905 [2024-05-14 03:03:50.865810] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:04.905 [2024-05-14 
03:03:50.865826] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:04.905 [2024-05-14 03:03:50.865838] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:04.905 [2024-05-14 03:03:50.865847] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:04.905 [2024-05-14 03:03:50.865858] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:04.905 [2024-05-14 03:03:50.865868] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:04.905 [2024-05-14 03:03:50.865888] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:04.905 [2024-05-14 03:03:50.865898] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:04.905 [2024-05-14 03:03:50.865908] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.905 [2024-05-14 03:03:50.865919] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:04.905 [2024-05-14 03:03:50.865931] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.291 ms 00:17:04.905 [2024-05-14 03:03:50.865945] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.905 [2024-05-14 03:03:50.867318] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.905 [2024-05-14 03:03:50.867363] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:04.905 [2024-05-14 03:03:50.867376] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.350 ms 00:17:04.905 [2024-05-14 03:03:50.867386] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.905 [2024-05-14 03:03:50.867459] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.905 [2024-05-14 03:03:50.867474] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:04.905 [2024-05-14 03:03:50.867486] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:17:04.905 [2024-05-14 03:03:50.867496] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.905 [2024-05-14 03:03:50.872448] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.905 [2024-05-14 03:03:50.872499] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:04.905 [2024-05-14 03:03:50.872512] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.905 [2024-05-14 03:03:50.872532] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.905 [2024-05-14 03:03:50.872587] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.905 [2024-05-14 03:03:50.872601] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:04.905 [2024-05-14 03:03:50.872612] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.905 [2024-05-14 03:03:50.872622] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.905 [2024-05-14 03:03:50.872680] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.905 [2024-05-14 03:03:50.872697] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:04.905 [2024-05-14 03:03:50.872708] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.905 [2024-05-14 03:03:50.872734] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.905 [2024-05-14 03:03:50.872773] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:17:04.905 [2024-05-14 03:03:50.872785] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:04.905 [2024-05-14 03:03:50.872796] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.905 [2024-05-14 03:03:50.872806] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.905 [2024-05-14 03:03:50.880584] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.905 [2024-05-14 03:03:50.880648] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:04.905 [2024-05-14 03:03:50.880680] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.905 [2024-05-14 03:03:50.880690] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.905 [2024-05-14 03:03:50.884406] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.905 [2024-05-14 03:03:50.884443] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:04.905 [2024-05-14 03:03:50.884473] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.905 [2024-05-14 03:03:50.884484] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.905 [2024-05-14 03:03:50.884512] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.905 [2024-05-14 03:03:50.884525] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:04.905 [2024-05-14 03:03:50.884543] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.905 [2024-05-14 03:03:50.884553] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.906 [2024-05-14 03:03:50.884599] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.906 [2024-05-14 03:03:50.884621] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:04.906 [2024-05-14 03:03:50.884631] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.906 [2024-05-14 03:03:50.884657] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.906 [2024-05-14 03:03:50.884774] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.906 [2024-05-14 03:03:50.884792] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:04.906 [2024-05-14 03:03:50.884816] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.906 [2024-05-14 03:03:50.884827] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.906 [2024-05-14 03:03:50.884874] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.906 [2024-05-14 03:03:50.884891] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:04.906 [2024-05-14 03:03:50.884903] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.906 [2024-05-14 03:03:50.884914] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.906 [2024-05-14 03:03:50.884960] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.906 [2024-05-14 03:03:50.884984] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:04.906 [2024-05-14 03:03:50.884996] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.906 [2024-05-14 03:03:50.885010] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.906 
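The shutdown sequence above is traced step by step: for each management step, trace_step emits an Action/name record followed by its duration and status. A minimal sketch for tallying those per-step durations from a capture like this one; it assumes one log record per line (as in the raw spdk_tgt output, before the wrapping seen here) and a hypothetical capture file name ftl0.log:

  # Pair each 'name:' record with the 'duration:' record that follows it and sum the steps.
  awk '/trace_step/ && /name: /     { sub(/.*name: /, "");  name = $0 }
       /trace_step/ && /duration: / { sub(/.*duration: /, ""); sub(/ ms.*/, "");
                                      printf "%-28s %8.3f ms\n", name, $0; total += $0 }
       END                          { printf "%-28s %8.3f ms\n", "traced steps total", total }' ftl0.log

For this shutdown the traced steps add up to roughly 28 ms, well under the 47.807 ms that finish_msg reports just below; the difference is presumably time spent outside the traced step callbacks.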
[2024-05-14 03:03:50.885068] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.906 [2024-05-14 03:03:50.885099] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:04.906 [2024-05-14 03:03:50.885111] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.906 [2024-05-14 03:03:50.885122] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.906 [2024-05-14 03:03:50.885322] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 47.807 ms, result 0 00:17:05.163 00:17:05.163 00:17:05.163 03:03:51 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=90364 00:17:05.163 03:03:51 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:05.163 03:03:51 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 90364 00:17:05.163 03:03:51 ftl.ftl_trim -- common/autotest_common.sh@827 -- # '[' -z 90364 ']' 00:17:05.163 03:03:51 ftl.ftl_trim -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:05.163 03:03:51 ftl.ftl_trim -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:05.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:05.163 03:03:51 ftl.ftl_trim -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:05.163 03:03:51 ftl.ftl_trim -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:05.163 03:03:51 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:05.421 [2024-05-14 03:03:51.285196] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:17:05.421 [2024-05-14 03:03:51.285373] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90364 ] 00:17:05.421 [2024-05-14 03:03:51.431664] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:17:05.678 [2024-05-14 03:03:51.451360] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.678 [2024-05-14 03:03:51.485659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:06.242 03:03:52 ftl.ftl_trim -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:06.242 03:03:52 ftl.ftl_trim -- common/autotest_common.sh@860 -- # return 0 00:17:06.242 03:03:52 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:06.503 [2024-05-14 03:03:52.456391] bdev.c:8090:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:06.503 [2024-05-14 03:03:52.456491] bdev.c:8090:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:06.763 [2024-05-14 03:03:52.620437] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.763 [2024-05-14 03:03:52.620524] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:06.763 [2024-05-14 03:03:52.620543] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:06.763 [2024-05-14 03:03:52.620555] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.763 [2024-05-14 03:03:52.623060] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.763 [2024-05-14 03:03:52.623162] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:06.763 [2024-05-14 03:03:52.623182] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.481 ms 00:17:06.763 [2024-05-14 03:03:52.623197] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.763 [2024-05-14 03:03:52.623295] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:06.763 [2024-05-14 03:03:52.623601] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:06.763 [2024-05-14 03:03:52.623629] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.763 [2024-05-14 03:03:52.623644] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:06.763 [2024-05-14 03:03:52.623658] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.344 ms 00:17:06.763 [2024-05-14 03:03:52.623670] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.763 [2024-05-14 03:03:52.625044] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:06.763 [2024-05-14 03:03:52.627374] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.763 [2024-05-14 03:03:52.627417] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:06.763 [2024-05-14 03:03:52.627453] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.328 ms 00:17:06.763 [2024-05-14 03:03:52.627466] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.763 [2024-05-14 03:03:52.627537] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.763 [2024-05-14 03:03:52.627574] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:06.763 [2024-05-14 03:03:52.627592] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:17:06.763 [2024-05-14 03:03:52.627626] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.763 [2024-05-14 03:03:52.631980] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.763 [2024-05-14 
03:03:52.632037] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:06.763 [2024-05-14 03:03:52.632074] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.293 ms 00:17:06.763 [2024-05-14 03:03:52.632096] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.763 [2024-05-14 03:03:52.632283] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.763 [2024-05-14 03:03:52.632305] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:06.763 [2024-05-14 03:03:52.632320] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:17:06.763 [2024-05-14 03:03:52.632349] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.763 [2024-05-14 03:03:52.632399] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.763 [2024-05-14 03:03:52.632414] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:06.763 [2024-05-14 03:03:52.632428] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:06.763 [2024-05-14 03:03:52.632439] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.763 [2024-05-14 03:03:52.632489] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:06.763 [2024-05-14 03:03:52.633805] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.763 [2024-05-14 03:03:52.633873] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:06.763 [2024-05-14 03:03:52.633888] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.327 ms 00:17:06.763 [2024-05-14 03:03:52.633901] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.763 [2024-05-14 03:03:52.633944] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.763 [2024-05-14 03:03:52.633963] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:06.763 [2024-05-14 03:03:52.633975] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:06.763 [2024-05-14 03:03:52.633987] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.763 [2024-05-14 03:03:52.634029] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:06.763 [2024-05-14 03:03:52.634067] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:17:06.763 [2024-05-14 03:03:52.634134] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:06.763 [2024-05-14 03:03:52.634174] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:17:06.763 [2024-05-14 03:03:52.634287] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:06.763 [2024-05-14 03:03:52.634321] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:06.763 [2024-05-14 03:03:52.634336] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:06.763 [2024-05-14 03:03:52.634354] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:06.763 [2024-05-14 03:03:52.634378] ftl_layout.c: 675:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:06.763 [2024-05-14 03:03:52.634396] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:06.763 [2024-05-14 03:03:52.634408] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:06.763 [2024-05-14 03:03:52.634421] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:06.763 [2024-05-14 03:03:52.634434] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:06.763 [2024-05-14 03:03:52.634449] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.763 [2024-05-14 03:03:52.634463] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:06.763 [2024-05-14 03:03:52.634477] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.420 ms 00:17:06.763 [2024-05-14 03:03:52.634488] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.763 [2024-05-14 03:03:52.634581] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.763 [2024-05-14 03:03:52.634607] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:06.763 [2024-05-14 03:03:52.634627] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:17:06.763 [2024-05-14 03:03:52.634639] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.763 [2024-05-14 03:03:52.634757] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:06.763 [2024-05-14 03:03:52.634792] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:06.763 [2024-05-14 03:03:52.634840] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:06.763 [2024-05-14 03:03:52.634852] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:06.763 [2024-05-14 03:03:52.634882] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:06.763 [2024-05-14 03:03:52.634892] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:06.763 [2024-05-14 03:03:52.634904] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:06.763 [2024-05-14 03:03:52.634914] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:06.763 [2024-05-14 03:03:52.634926] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:06.763 [2024-05-14 03:03:52.634935] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:06.763 [2024-05-14 03:03:52.634947] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:06.763 [2024-05-14 03:03:52.634957] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:06.763 [2024-05-14 03:03:52.634968] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:06.763 [2024-05-14 03:03:52.634978] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:06.763 [2024-05-14 03:03:52.634990] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:17:06.763 [2024-05-14 03:03:52.635000] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:06.763 [2024-05-14 03:03:52.635027] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:06.763 [2024-05-14 03:03:52.635055] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:17:06.763 [2024-05-14 03:03:52.635083] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:17:06.764 [2024-05-14 03:03:52.635094] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:06.764 [2024-05-14 03:03:52.635109] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:17:06.764 [2024-05-14 03:03:52.635120] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:06.764 [2024-05-14 03:03:52.635147] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:06.764 [2024-05-14 03:03:52.635158] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:06.764 [2024-05-14 03:03:52.635171] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:06.764 [2024-05-14 03:03:52.635181] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:06.764 [2024-05-14 03:03:52.635194] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:17:06.764 [2024-05-14 03:03:52.635205] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:06.764 [2024-05-14 03:03:52.635217] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:06.764 [2024-05-14 03:03:52.635228] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:06.764 [2024-05-14 03:03:52.635241] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:06.764 [2024-05-14 03:03:52.635252] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:06.764 [2024-05-14 03:03:52.635264] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:17:06.764 [2024-05-14 03:03:52.635275] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:06.764 [2024-05-14 03:03:52.635287] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:06.764 [2024-05-14 03:03:52.635324] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:06.764 [2024-05-14 03:03:52.635342] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:06.764 [2024-05-14 03:03:52.635353] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:06.764 [2024-05-14 03:03:52.635366] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:17:06.764 [2024-05-14 03:03:52.635377] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:06.764 [2024-05-14 03:03:52.635389] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:06.764 [2024-05-14 03:03:52.635401] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:06.764 [2024-05-14 03:03:52.635414] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:06.764 [2024-05-14 03:03:52.635425] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:06.764 [2024-05-14 03:03:52.635439] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:06.764 [2024-05-14 03:03:52.635450] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:06.764 [2024-05-14 03:03:52.635463] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:06.764 [2024-05-14 03:03:52.635474] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:06.764 [2024-05-14 03:03:52.635488] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:06.764 [2024-05-14 03:03:52.635501] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:06.764 [2024-05-14 03:03:52.635516] 
upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:06.764 [2024-05-14 03:03:52.635531] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:06.764 [2024-05-14 03:03:52.635548] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:06.764 [2024-05-14 03:03:52.635560] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:17:06.764 [2024-05-14 03:03:52.635573] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:17:06.764 [2024-05-14 03:03:52.635585] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:17:06.764 [2024-05-14 03:03:52.635599] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:17:06.764 [2024-05-14 03:03:52.635625] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:17:06.764 [2024-05-14 03:03:52.635638] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:17:06.764 [2024-05-14 03:03:52.635664] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:17:06.764 [2024-05-14 03:03:52.635677] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:17:06.764 [2024-05-14 03:03:52.635704] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:17:06.764 [2024-05-14 03:03:52.635716] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:17:06.764 [2024-05-14 03:03:52.635742] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:17:06.764 [2024-05-14 03:03:52.635754] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:17:06.764 [2024-05-14 03:03:52.635765] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:06.764 [2024-05-14 03:03:52.635802] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:06.764 [2024-05-14 03:03:52.635834] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:06.764 [2024-05-14 03:03:52.635865] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:06.764 [2024-05-14 03:03:52.635894] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:06.764 [2024-05-14 03:03:52.635907] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:06.764 [2024-05-14 03:03:52.635921] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.764 [2024-05-14 03:03:52.635939] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:06.764 [2024-05-14 03:03:52.635951] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.227 ms 00:17:06.764 [2024-05-14 03:03:52.635965] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.764 [2024-05-14 03:03:52.642938] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.764 [2024-05-14 03:03:52.643014] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:06.764 [2024-05-14 03:03:52.643033] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.892 ms 00:17:06.764 [2024-05-14 03:03:52.643047] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.764 [2024-05-14 03:03:52.643236] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.764 [2024-05-14 03:03:52.643261] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:06.764 [2024-05-14 03:03:52.643291] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:17:06.764 [2024-05-14 03:03:52.643304] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.764 [2024-05-14 03:03:52.652840] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.764 [2024-05-14 03:03:52.652926] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:06.764 [2024-05-14 03:03:52.652944] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.507 ms 00:17:06.764 [2024-05-14 03:03:52.652956] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.764 [2024-05-14 03:03:52.653097] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.764 [2024-05-14 03:03:52.653123] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:06.764 [2024-05-14 03:03:52.653136] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:06.764 [2024-05-14 03:03:52.653148] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.764 [2024-05-14 03:03:52.653488] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.764 [2024-05-14 03:03:52.653519] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:06.764 [2024-05-14 03:03:52.653534] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:17:06.764 [2024-05-14 03:03:52.653547] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.764 [2024-05-14 03:03:52.653697] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.764 [2024-05-14 03:03:52.653719] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:06.764 [2024-05-14 03:03:52.653736] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:17:06.764 [2024-05-14 03:03:52.653748] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.764 [2024-05-14 03:03:52.659038] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.764 [2024-05-14 03:03:52.659118] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:06.764 [2024-05-14 03:03:52.659134] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.263 ms 00:17:06.764 
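The layout figures reported during this startup are internally consistent: 23,592,960 L2P entries at 4 bytes per address come to 90 MiB, matching the l2p region in the layout dump, and the base-device region of 0x1900000 blocks lines up with the 102,400 MiB data_btm region if the FTL's 4 KiB block size is assumed (the block size is an assumption here, not stated in the log). A quick shell check of both figures:

  # Cross-check the startup layout dump (4 KiB FTL block size assumed)
  echo "l2p region: $(( 23592960 * 4 / 1024 / 1024 )) MiB"       # 90 MiB
  echo "data_btm:   $(( 0x1900000 * 4096 / 1024 / 1024 )) MiB"   # 102400 MiB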
[2024-05-14 03:03:52.659175] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.764 [2024-05-14 03:03:52.661613] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:06.764 [2024-05-14 03:03:52.661656] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:06.764 [2024-05-14 03:03:52.661688] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.764 [2024-05-14 03:03:52.661712] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:06.764 [2024-05-14 03:03:52.661725] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.399 ms 00:17:06.764 [2024-05-14 03:03:52.661736] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.764 [2024-05-14 03:03:52.675544] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.764 [2024-05-14 03:03:52.675618] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:06.764 [2024-05-14 03:03:52.675636] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.682 ms 00:17:06.764 [2024-05-14 03:03:52.675649] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.764 [2024-05-14 03:03:52.677671] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.764 [2024-05-14 03:03:52.677740] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:06.764 [2024-05-14 03:03:52.677755] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.936 ms 00:17:06.764 [2024-05-14 03:03:52.677769] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.764 [2024-05-14 03:03:52.679411] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.764 [2024-05-14 03:03:52.679481] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:06.764 [2024-05-14 03:03:52.679496] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.596 ms 00:17:06.764 [2024-05-14 03:03:52.679508] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.764 [2024-05-14 03:03:52.679771] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.764 [2024-05-14 03:03:52.679839] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:06.765 [2024-05-14 03:03:52.679862] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.185 ms 00:17:06.765 [2024-05-14 03:03:52.679876] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.765 [2024-05-14 03:03:52.700796] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.765 [2024-05-14 03:03:52.700889] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:06.765 [2024-05-14 03:03:52.700917] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.873 ms 00:17:06.765 [2024-05-14 03:03:52.700931] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.765 [2024-05-14 03:03:52.709296] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:06.765 [2024-05-14 03:03:52.722993] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.765 [2024-05-14 03:03:52.723065] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:06.765 [2024-05-14 03:03:52.723104] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.943 ms 00:17:06.765 [2024-05-14 03:03:52.723129] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.765 [2024-05-14 03:03:52.723307] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.765 [2024-05-14 03:03:52.723329] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:06.765 [2024-05-14 03:03:52.723354] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:06.765 [2024-05-14 03:03:52.723368] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.765 [2024-05-14 03:03:52.723462] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.765 [2024-05-14 03:03:52.723479] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:06.765 [2024-05-14 03:03:52.723504] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:17:06.765 [2024-05-14 03:03:52.723516] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.765 [2024-05-14 03:03:52.725778] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.765 [2024-05-14 03:03:52.725828] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:06.765 [2024-05-14 03:03:52.725862] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.231 ms 00:17:06.765 [2024-05-14 03:03:52.725873] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.765 [2024-05-14 03:03:52.725912] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.765 [2024-05-14 03:03:52.725935] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:06.765 [2024-05-14 03:03:52.725954] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:06.765 [2024-05-14 03:03:52.725965] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.765 [2024-05-14 03:03:52.726007] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:06.765 [2024-05-14 03:03:52.726023] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.765 [2024-05-14 03:03:52.726036] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:06.765 [2024-05-14 03:03:52.726063] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:17:06.765 [2024-05-14 03:03:52.726076] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.765 [2024-05-14 03:03:52.730055] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.765 [2024-05-14 03:03:52.730118] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:06.765 [2024-05-14 03:03:52.730161] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.938 ms 00:17:06.765 [2024-05-14 03:03:52.730177] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.765 [2024-05-14 03:03:52.730254] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.765 [2024-05-14 03:03:52.730277] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:06.765 [2024-05-14 03:03:52.730289] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:17:06.765 [2024-05-14 03:03:52.730300] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.765 [2024-05-14 03:03:52.731419] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:06.765 [2024-05-14 03:03:52.732832] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 110.580 ms, result 0 00:17:06.765 [2024-05-14 03:03:52.734353] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:06.765 Some configs were skipped because the RPC state that can call them passed over. 00:17:06.765 03:03:52 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:07.027 [2024-05-14 03:03:52.981867] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.027 [2024-05-14 03:03:52.981931] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:17:07.027 [2024-05-14 03:03:52.981952] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.437 ms 00:17:07.027 [2024-05-14 03:03:52.981963] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.027 [2024-05-14 03:03:52.982007] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 4.585 ms, result 0 00:17:07.027 true 00:17:07.027 03:03:53 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:07.289 [2024-05-14 03:03:53.201216] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.289 [2024-05-14 03:03:53.201285] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:17:07.289 [2024-05-14 03:03:53.201320] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.594 ms 00:17:07.289 [2024-05-14 03:03:53.201334] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.289 [2024-05-14 03:03:53.201385] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 3.761 ms, result 0 00:17:07.289 true 00:17:07.289 03:03:53 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 90364 00:17:07.289 03:03:53 ftl.ftl_trim -- common/autotest_common.sh@946 -- # '[' -z 90364 ']' 00:17:07.289 03:03:53 ftl.ftl_trim -- common/autotest_common.sh@950 -- # kill -0 90364 00:17:07.289 03:03:53 ftl.ftl_trim -- common/autotest_common.sh@951 -- # uname 00:17:07.289 03:03:53 ftl.ftl_trim -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:07.289 03:03:53 ftl.ftl_trim -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 90364 00:17:07.289 03:03:53 ftl.ftl_trim -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:07.289 03:03:53 ftl.ftl_trim -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:07.289 killing process with pid 90364 00:17:07.289 03:03:53 ftl.ftl_trim -- common/autotest_common.sh@964 -- # echo 'killing process with pid 90364' 00:17:07.289 03:03:53 ftl.ftl_trim -- common/autotest_common.sh@965 -- # kill 90364 00:17:07.289 03:03:53 ftl.ftl_trim -- common/autotest_common.sh@970 -- # wait 90364 00:17:07.551 [2024-05-14 03:03:53.339893] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.551 [2024-05-14 03:03:53.339967] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:07.551 [2024-05-14 03:03:53.339993] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:07.551 [2024-05-14 03:03:53.340006] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:17:07.551 [2024-05-14 03:03:53.340050] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:07.551 [2024-05-14 03:03:53.340568] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.551 [2024-05-14 03:03:53.340622] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:07.551 [2024-05-14 03:03:53.340646] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.494 ms 00:17:07.551 [2024-05-14 03:03:53.340661] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.551 [2024-05-14 03:03:53.340984] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.551 [2024-05-14 03:03:53.341021] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:07.551 [2024-05-14 03:03:53.341037] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:17:07.551 [2024-05-14 03:03:53.341050] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.551 [2024-05-14 03:03:53.345469] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.551 [2024-05-14 03:03:53.345520] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:07.551 [2024-05-14 03:03:53.345541] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.394 ms 00:17:07.551 [2024-05-14 03:03:53.345555] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.551 [2024-05-14 03:03:53.353018] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.551 [2024-05-14 03:03:53.353089] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:07.551 [2024-05-14 03:03:53.353105] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.414 ms 00:17:07.551 [2024-05-14 03:03:53.353120] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.551 [2024-05-14 03:03:53.354531] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.551 [2024-05-14 03:03:53.354591] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:07.551 [2024-05-14 03:03:53.354624] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.309 ms 00:17:07.551 [2024-05-14 03:03:53.354638] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.551 [2024-05-14 03:03:53.357965] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.551 [2024-05-14 03:03:53.358044] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:07.551 [2024-05-14 03:03:53.358077] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.268 ms 00:17:07.551 [2024-05-14 03:03:53.358101] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.551 [2024-05-14 03:03:53.358335] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.551 [2024-05-14 03:03:53.358371] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:07.551 [2024-05-14 03:03:53.358398] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:17:07.551 [2024-05-14 03:03:53.358412] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.551 [2024-05-14 03:03:53.360295] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.551 [2024-05-14 03:03:53.360366] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist 
band info metadata 00:17:07.551 [2024-05-14 03:03:53.360400] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.857 ms 00:17:07.551 [2024-05-14 03:03:53.360419] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.551 [2024-05-14 03:03:53.361833] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.551 [2024-05-14 03:03:53.361909] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:07.551 [2024-05-14 03:03:53.361926] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.368 ms 00:17:07.551 [2024-05-14 03:03:53.361940] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.551 [2024-05-14 03:03:53.363159] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.551 [2024-05-14 03:03:53.363263] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:07.551 [2024-05-14 03:03:53.363280] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.175 ms 00:17:07.551 [2024-05-14 03:03:53.363293] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.551 [2024-05-14 03:03:53.364561] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.551 [2024-05-14 03:03:53.364637] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:07.551 [2024-05-14 03:03:53.364653] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.160 ms 00:17:07.551 [2024-05-14 03:03:53.364680] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.551 [2024-05-14 03:03:53.364735] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:07.551 [2024-05-14 03:03:53.364762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:07.551 [2024-05-14 03:03:53.364777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:07.551 [2024-05-14 03:03:53.364793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:07.551 [2024-05-14 03:03:53.364806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:07.551 [2024-05-14 03:03:53.364835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:07.551 [2024-05-14 03:03:53.364863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:07.551 [2024-05-14 03:03:53.364878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:07.551 [2024-05-14 03:03:53.364890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:07.551 [2024-05-14 03:03:53.364905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:07.551 [2024-05-14 03:03:53.364917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:07.551 [2024-05-14 03:03:53.364934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:07.551 [2024-05-14 03:03:53.364946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:07.551 [2024-05-14 03:03:53.364960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: 
free 00:17:07.551 [2024-05-14 03:03:53.364973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:07.551 [2024-05-14 03:03:53.364987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:07.551 [2024-05-14 03:03:53.364999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:07.551 [2024-05-14 03:03:53.365014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:07.551 [2024-05-14 03:03:53.365026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:07.551 [2024-05-14 03:03:53.365042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:07.551 [2024-05-14 03:03:53.365055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:07.551 [2024-05-14 03:03:53.365080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:07.551 [2024-05-14 03:03:53.365093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:07.551 [2024-05-14 03:03:53.365107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:07.551 [2024-05-14 03:03:53.365120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:07.551 [2024-05-14 03:03:53.365134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:07.551 [2024-05-14 03:03:53.365146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:07.551 [2024-05-14 03:03:53.365173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:07.551 [2024-05-14 03:03:53.365189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:07.551 [2024-05-14 03:03:53.365204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:07.551 [2024-05-14 03:03:53.365217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:07.551 [2024-05-14 03:03:53.365231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:07.551 [2024-05-14 03:03:53.365244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:07.551 [2024-05-14 03:03:53.365258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:07.551 [2024-05-14 03:03:53.365270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:07.551 [2024-05-14 03:03:53.365287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:07.551 [2024-05-14 03:03:53.365299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:07.551 [2024-05-14 03:03:53.365316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:07.552 [2024-05-14 03:03:53.365329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 
261120 wr_cnt: 0 state: free 00:17:07.552 [2024-05-14 03:03:53.365343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:07.552 [2024-05-14 03:03:53.365356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:07.552 [2024-05-14 03:03:53.365370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:07.552 [2024-05-14 03:03:53.365384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:07.552 [2024-05-14 03:03:53.365399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:07.552 [2024-05-14 03:03:53.365411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:07.552 [2024-05-14 03:03:53.365425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:07.552 [2024-05-14 03:03:53.365437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:07.552 [2024-05-14 03:03:53.365451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:07.552 [2024-05-14 03:03:53.365464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:07.552 [2024-05-14 03:03:53.365478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:07.552 [2024-05-14 03:03:53.365490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:07.552 [2024-05-14 03:03:53.365507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:07.552 [2024-05-14 03:03:53.365520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:07.552 [2024-05-14 03:03:53.365534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:07.552 [2024-05-14 03:03:53.365546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:07.552 [2024-05-14 03:03:53.365560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:07.552 [2024-05-14 03:03:53.365573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:07.552 [2024-05-14 03:03:53.365587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:07.552 [2024-05-14 03:03:53.365599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:07.552 [2024-05-14 03:03:53.365613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:07.552 [2024-05-14 03:03:53.365625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:07.552 [2024-05-14 03:03:53.365639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:07.552 [2024-05-14 03:03:53.365652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:07.552 [2024-05-14 03:03:53.365666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:07.552 [2024-05-14 03:03:53.365678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:07.552 [2024-05-14 03:03:53.365694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:07.552 [2024-05-14 03:03:53.365707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:07.552 [2024-05-14 03:03:53.365723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:07.552 [2024-05-14 03:03:53.365735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:07.552 [2024-05-14 03:03:53.365750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:07.552 [2024-05-14 03:03:53.365763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:07.552 [2024-05-14 03:03:53.365792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:07.552 [2024-05-14 03:03:53.365804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:07.552 [2024-05-14 03:03:53.365817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:07.552 [2024-05-14 03:03:53.365830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:07.552 [2024-05-14 03:03:53.365860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:07.552 [2024-05-14 03:03:53.365872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:07.552 [2024-05-14 03:03:53.365886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:07.552 [2024-05-14 03:03:53.365899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:07.552 [2024-05-14 03:03:53.365913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:07.552 [2024-05-14 03:03:53.365926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:07.552 [2024-05-14 03:03:53.365940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:07.552 [2024-05-14 03:03:53.365952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:07.552 [2024-05-14 03:03:53.365968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:07.552 [2024-05-14 03:03:53.365981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:07.552 [2024-05-14 03:03:53.366008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:07.552 [2024-05-14 03:03:53.366021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:07.552 [2024-05-14 03:03:53.366035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:07.552 [2024-05-14 03:03:53.366048] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:07.552 [2024-05-14 03:03:53.366064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:07.552 [2024-05-14 03:03:53.366077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:07.552 [2024-05-14 03:03:53.366092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:07.552 [2024-05-14 03:03:53.366105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:07.552 [2024-05-14 03:03:53.366119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:07.552 [2024-05-14 03:03:53.366131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:07.552 [2024-05-14 03:03:53.366146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:07.552 [2024-05-14 03:03:53.366158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:07.552 [2024-05-14 03:03:53.366186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:07.552 [2024-05-14 03:03:53.366199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:07.552 [2024-05-14 03:03:53.366216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:07.552 [2024-05-14 03:03:53.366237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:07.552 [2024-05-14 03:03:53.366260] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:07.552 [2024-05-14 03:03:53.366275] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f1c59e5a-0c75-4386-944d-643450163ef9 00:17:07.552 [2024-05-14 03:03:53.366289] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:07.552 [2024-05-14 03:03:53.366300] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:07.552 [2024-05-14 03:03:53.366314] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:07.552 [2024-05-14 03:03:53.366331] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:07.552 [2024-05-14 03:03:53.366344] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:07.552 [2024-05-14 03:03:53.366356] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:07.552 [2024-05-14 03:03:53.366370] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:07.552 [2024-05-14 03:03:53.366381] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:07.552 [2024-05-14 03:03:53.366394] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:07.552 [2024-05-14 03:03:53.366407] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.552 [2024-05-14 03:03:53.366421] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:07.552 [2024-05-14 03:03:53.366433] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.673 ms 00:17:07.552 [2024-05-14 03:03:53.366462] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.552 [2024-05-14 
03:03:53.367944] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.552 [2024-05-14 03:03:53.367982] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:07.552 [2024-05-14 03:03:53.367998] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.436 ms 00:17:07.552 [2024-05-14 03:03:53.368012] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.552 [2024-05-14 03:03:53.368075] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.552 [2024-05-14 03:03:53.368094] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:07.552 [2024-05-14 03:03:53.368125] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:17:07.552 [2024-05-14 03:03:53.368201] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.552 [2024-05-14 03:03:53.373784] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:07.552 [2024-05-14 03:03:53.373859] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:07.552 [2024-05-14 03:03:53.373875] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:07.552 [2024-05-14 03:03:53.373887] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.552 [2024-05-14 03:03:53.374013] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:07.552 [2024-05-14 03:03:53.374066] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:07.552 [2024-05-14 03:03:53.374082] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:07.552 [2024-05-14 03:03:53.374097] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.552 [2024-05-14 03:03:53.374157] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:07.552 [2024-05-14 03:03:53.374181] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:07.552 [2024-05-14 03:03:53.374208] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:07.552 [2024-05-14 03:03:53.374225] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.553 [2024-05-14 03:03:53.374252] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:07.553 [2024-05-14 03:03:53.374269] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:07.553 [2024-05-14 03:03:53.374281] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:07.553 [2024-05-14 03:03:53.374297] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.553 [2024-05-14 03:03:53.383734] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:07.553 [2024-05-14 03:03:53.383867] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:07.553 [2024-05-14 03:03:53.383887] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:07.553 [2024-05-14 03:03:53.383901] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.553 [2024-05-14 03:03:53.387937] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:07.553 [2024-05-14 03:03:53.388014] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:07.553 [2024-05-14 03:03:53.388032] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:07.553 [2024-05-14 03:03:53.388063] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.553 [2024-05-14 03:03:53.388152] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:07.553 [2024-05-14 03:03:53.388189] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:07.553 [2024-05-14 03:03:53.388218] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:07.553 [2024-05-14 03:03:53.388231] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.553 [2024-05-14 03:03:53.388302] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:07.553 [2024-05-14 03:03:53.388322] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:07.553 [2024-05-14 03:03:53.388335] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:07.553 [2024-05-14 03:03:53.388359] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.553 [2024-05-14 03:03:53.388470] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:07.553 [2024-05-14 03:03:53.388509] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:07.553 [2024-05-14 03:03:53.388525] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:07.553 [2024-05-14 03:03:53.388539] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.553 [2024-05-14 03:03:53.388595] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:07.553 [2024-05-14 03:03:53.388618] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:07.553 [2024-05-14 03:03:53.388631] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:07.553 [2024-05-14 03:03:53.388647] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.553 [2024-05-14 03:03:53.388714] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:07.553 [2024-05-14 03:03:53.388744] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:07.553 [2024-05-14 03:03:53.388758] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:07.553 [2024-05-14 03:03:53.388771] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.553 [2024-05-14 03:03:53.388827] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:07.553 [2024-05-14 03:03:53.388848] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:07.553 [2024-05-14 03:03:53.388861] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:07.553 [2024-05-14 03:03:53.388873] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.553 [2024-05-14 03:03:53.389031] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 49.120 ms, result 0 00:17:07.840 03:03:53 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:07.840 03:03:53 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:07.840 [2024-05-14 03:03:53.662326] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 
00:17:07.840 [2024-05-14 03:03:53.662484] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90400 ] 00:17:07.840 [2024-05-14 03:03:53.796061] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:07.840 [2024-05-14 03:03:53.815745] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:07.840 [2024-05-14 03:03:53.848600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:08.110 [2024-05-14 03:03:53.931375] bdev.c:8090:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:08.110 [2024-05-14 03:03:53.931495] bdev.c:8090:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:08.110 [2024-05-14 03:03:54.080051] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.110 [2024-05-14 03:03:54.080161] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:08.110 [2024-05-14 03:03:54.080196] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:08.110 [2024-05-14 03:03:54.080207] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.110 [2024-05-14 03:03:54.082439] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.110 [2024-05-14 03:03:54.082491] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:08.110 [2024-05-14 03:03:54.082522] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.206 ms 00:17:08.110 [2024-05-14 03:03:54.082532] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.110 [2024-05-14 03:03:54.082648] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:08.110 [2024-05-14 03:03:54.082945] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:08.110 [2024-05-14 03:03:54.082980] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.110 [2024-05-14 03:03:54.082996] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:08.110 [2024-05-14 03:03:54.083017] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.341 ms 00:17:08.110 [2024-05-14 03:03:54.083027] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.110 [2024-05-14 03:03:54.084431] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:08.110 [2024-05-14 03:03:54.086593] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.110 [2024-05-14 03:03:54.086652] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:08.110 [2024-05-14 03:03:54.086682] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.164 ms 00:17:08.110 [2024-05-14 03:03:54.086693] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.110 [2024-05-14 03:03:54.086762] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.110 [2024-05-14 03:03:54.086781] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:08.110 [2024-05-14 03:03:54.086795] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:17:08.110 [2024-05-14 03:03:54.086805] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.110 [2024-05-14 03:03:54.091080] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.110 [2024-05-14 03:03:54.091158] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:08.110 [2024-05-14 03:03:54.091178] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.224 ms 00:17:08.110 [2024-05-14 03:03:54.091188] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.110 [2024-05-14 03:03:54.091308] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.110 [2024-05-14 03:03:54.091327] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:08.110 [2024-05-14 03:03:54.091338] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:17:08.110 [2024-05-14 03:03:54.091350] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.110 [2024-05-14 03:03:54.091449] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.110 [2024-05-14 03:03:54.091464] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:08.110 [2024-05-14 03:03:54.091475] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:08.110 [2024-05-14 03:03:54.091494] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.110 [2024-05-14 03:03:54.091527] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:08.110 [2024-05-14 03:03:54.092867] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.110 [2024-05-14 03:03:54.092922] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:08.110 [2024-05-14 03:03:54.092952] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.348 ms 00:17:08.110 [2024-05-14 03:03:54.092966] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.110 [2024-05-14 03:03:54.093009] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.110 [2024-05-14 03:03:54.093024] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:08.110 [2024-05-14 03:03:54.093045] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:08.110 [2024-05-14 03:03:54.093055] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.110 [2024-05-14 03:03:54.093078] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:08.110 [2024-05-14 03:03:54.093105] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:17:08.110 [2024-05-14 03:03:54.093156] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:08.110 [2024-05-14 03:03:54.093221] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:17:08.110 [2024-05-14 03:03:54.093304] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:08.110 [2024-05-14 03:03:54.093319] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:08.110 [2024-05-14 03:03:54.093333] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:08.110 [2024-05-14 
03:03:54.093360] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:08.110 [2024-05-14 03:03:54.093372] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:08.110 [2024-05-14 03:03:54.093383] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:08.110 [2024-05-14 03:03:54.093393] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:08.110 [2024-05-14 03:03:54.093403] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:08.110 [2024-05-14 03:03:54.093426] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:08.110 [2024-05-14 03:03:54.093439] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.110 [2024-05-14 03:03:54.093450] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:08.110 [2024-05-14 03:03:54.093461] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.363 ms 00:17:08.110 [2024-05-14 03:03:54.093471] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.110 [2024-05-14 03:03:54.093564] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.110 [2024-05-14 03:03:54.093584] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:08.110 [2024-05-14 03:03:54.093615] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:17:08.110 [2024-05-14 03:03:54.093632] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.110 [2024-05-14 03:03:54.093733] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:08.110 [2024-05-14 03:03:54.093752] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:08.110 [2024-05-14 03:03:54.093773] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:08.110 [2024-05-14 03:03:54.093791] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:08.110 [2024-05-14 03:03:54.093802] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:08.110 [2024-05-14 03:03:54.093811] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:08.110 [2024-05-14 03:03:54.093820] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:08.110 [2024-05-14 03:03:54.093830] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:08.110 [2024-05-14 03:03:54.093839] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:08.110 [2024-05-14 03:03:54.093848] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:08.110 [2024-05-14 03:03:54.093858] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:08.110 [2024-05-14 03:03:54.093868] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:08.110 [2024-05-14 03:03:54.093880] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:08.110 [2024-05-14 03:03:54.093899] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:08.110 [2024-05-14 03:03:54.093909] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:17:08.110 [2024-05-14 03:03:54.093918] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:08.110 [2024-05-14 03:03:54.093928] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:08.110 
[2024-05-14 03:03:54.093938] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:17:08.110 [2024-05-14 03:03:54.093947] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:08.110 [2024-05-14 03:03:54.093957] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:08.110 [2024-05-14 03:03:54.093967] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:17:08.110 [2024-05-14 03:03:54.093976] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:08.110 [2024-05-14 03:03:54.093986] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:08.110 [2024-05-14 03:03:54.093995] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:08.110 [2024-05-14 03:03:54.094004] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:08.110 [2024-05-14 03:03:54.094013] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:08.110 [2024-05-14 03:03:54.094022] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:17:08.110 [2024-05-14 03:03:54.094031] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:08.110 [2024-05-14 03:03:54.094045] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:08.110 [2024-05-14 03:03:54.094055] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:08.110 [2024-05-14 03:03:54.094064] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:08.110 [2024-05-14 03:03:54.094073] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:08.110 [2024-05-14 03:03:54.094083] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:17:08.110 [2024-05-14 03:03:54.094092] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:08.110 [2024-05-14 03:03:54.094101] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:08.111 [2024-05-14 03:03:54.094110] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:08.111 [2024-05-14 03:03:54.094119] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:08.111 [2024-05-14 03:03:54.094128] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:08.111 [2024-05-14 03:03:54.094137] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:17:08.111 [2024-05-14 03:03:54.094161] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:08.111 [2024-05-14 03:03:54.094172] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:08.111 [2024-05-14 03:03:54.094192] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:08.111 [2024-05-14 03:03:54.094203] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:08.111 [2024-05-14 03:03:54.094213] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:08.111 [2024-05-14 03:03:54.094226] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:08.111 [2024-05-14 03:03:54.094236] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:08.111 [2024-05-14 03:03:54.094246] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:08.111 [2024-05-14 03:03:54.094255] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:08.111 [2024-05-14 03:03:54.094264] ftl_layout.c: 116:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 0.25 MiB 00:17:08.111 [2024-05-14 03:03:54.094273] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:08.111 [2024-05-14 03:03:54.094284] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:08.111 [2024-05-14 03:03:54.094306] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:08.111 [2024-05-14 03:03:54.094317] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:08.111 [2024-05-14 03:03:54.094328] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:17:08.111 [2024-05-14 03:03:54.094338] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:17:08.111 [2024-05-14 03:03:54.094348] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:17:08.111 [2024-05-14 03:03:54.094358] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:17:08.111 [2024-05-14 03:03:54.094369] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:17:08.111 [2024-05-14 03:03:54.094378] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:17:08.111 [2024-05-14 03:03:54.094389] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:17:08.111 [2024-05-14 03:03:54.094402] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:17:08.111 [2024-05-14 03:03:54.094413] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:17:08.111 [2024-05-14 03:03:54.094423] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:17:08.111 [2024-05-14 03:03:54.094434] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:17:08.111 [2024-05-14 03:03:54.094444] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:17:08.111 [2024-05-14 03:03:54.094454] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:08.111 [2024-05-14 03:03:54.094470] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:08.111 [2024-05-14 03:03:54.094482] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:08.111 [2024-05-14 03:03:54.094492] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:08.111 [2024-05-14 03:03:54.094502] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 
ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:08.111 [2024-05-14 03:03:54.094513] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:08.111 [2024-05-14 03:03:54.094525] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.111 [2024-05-14 03:03:54.094536] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:08.111 [2024-05-14 03:03:54.094546] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.834 ms 00:17:08.111 [2024-05-14 03:03:54.094556] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.111 [2024-05-14 03:03:54.100547] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.111 [2024-05-14 03:03:54.100601] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:08.111 [2024-05-14 03:03:54.100633] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.935 ms 00:17:08.111 [2024-05-14 03:03:54.100643] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.111 [2024-05-14 03:03:54.100771] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.111 [2024-05-14 03:03:54.100788] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:08.111 [2024-05-14 03:03:54.100800] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:17:08.111 [2024-05-14 03:03:54.100813] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.111 [2024-05-14 03:03:54.120511] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.111 [2024-05-14 03:03:54.120570] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:08.111 [2024-05-14 03:03:54.120604] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.637 ms 00:17:08.111 [2024-05-14 03:03:54.120614] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.111 [2024-05-14 03:03:54.120709] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.111 [2024-05-14 03:03:54.120727] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:08.111 [2024-05-14 03:03:54.120738] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:08.111 [2024-05-14 03:03:54.120748] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.111 [2024-05-14 03:03:54.121110] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.111 [2024-05-14 03:03:54.121158] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:08.111 [2024-05-14 03:03:54.121174] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.333 ms 00:17:08.111 [2024-05-14 03:03:54.121185] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.111 [2024-05-14 03:03:54.121331] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.111 [2024-05-14 03:03:54.121363] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:08.111 [2024-05-14 03:03:54.121381] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:17:08.111 [2024-05-14 03:03:54.121393] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.111 [2024-05-14 03:03:54.126857] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.111 [2024-05-14 03:03:54.126899] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:08.111 [2024-05-14 03:03:54.126944] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.434 ms 00:17:08.111 [2024-05-14 03:03:54.126964] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.111 [2024-05-14 03:03:54.129301] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:08.111 [2024-05-14 03:03:54.129346] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:08.111 [2024-05-14 03:03:54.129374] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.111 [2024-05-14 03:03:54.129387] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:08.111 [2024-05-14 03:03:54.129400] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.286 ms 00:17:08.111 [2024-05-14 03:03:54.129411] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.369 [2024-05-14 03:03:54.145280] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.369 [2024-05-14 03:03:54.145336] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:08.369 [2024-05-14 03:03:54.145365] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.798 ms 00:17:08.369 [2024-05-14 03:03:54.145377] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.369 [2024-05-14 03:03:54.147341] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.369 [2024-05-14 03:03:54.147393] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:08.369 [2024-05-14 03:03:54.147412] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.866 ms 00:17:08.369 [2024-05-14 03:03:54.147423] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.369 [2024-05-14 03:03:54.149241] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.369 [2024-05-14 03:03:54.149305] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:08.369 [2024-05-14 03:03:54.149320] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.753 ms 00:17:08.369 [2024-05-14 03:03:54.149331] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.369 [2024-05-14 03:03:54.149627] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.369 [2024-05-14 03:03:54.149655] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:08.369 [2024-05-14 03:03:54.149669] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.213 ms 00:17:08.369 [2024-05-14 03:03:54.149680] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.369 [2024-05-14 03:03:54.167988] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.369 [2024-05-14 03:03:54.168065] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:08.369 [2024-05-14 03:03:54.168096] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.262 ms 00:17:08.369 [2024-05-14 03:03:54.168108] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.369 [2024-05-14 03:03:54.175693] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:08.369 [2024-05-14 03:03:54.189873] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:08.369 [2024-05-14 03:03:54.189964] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:08.369 [2024-05-14 03:03:54.189983] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.612 ms 00:17:08.369 [2024-05-14 03:03:54.190002] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.369 [2024-05-14 03:03:54.190125] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.369 [2024-05-14 03:03:54.190181] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:08.369 [2024-05-14 03:03:54.190200] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:08.369 [2024-05-14 03:03:54.190210] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.369 [2024-05-14 03:03:54.190286] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.369 [2024-05-14 03:03:54.190318] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:08.369 [2024-05-14 03:03:54.190331] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:17:08.369 [2024-05-14 03:03:54.190357] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.369 [2024-05-14 03:03:54.192443] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.370 [2024-05-14 03:03:54.192523] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:08.370 [2024-05-14 03:03:54.192538] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.056 ms 00:17:08.370 [2024-05-14 03:03:54.192568] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.370 [2024-05-14 03:03:54.192623] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.370 [2024-05-14 03:03:54.192638] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:08.370 [2024-05-14 03:03:54.192650] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:08.370 [2024-05-14 03:03:54.192687] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.370 [2024-05-14 03:03:54.192744] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:08.370 [2024-05-14 03:03:54.192775] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.370 [2024-05-14 03:03:54.192786] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:08.370 [2024-05-14 03:03:54.192837] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:17:08.370 [2024-05-14 03:03:54.192848] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.370 [2024-05-14 03:03:54.196643] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.370 [2024-05-14 03:03:54.196743] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:08.370 [2024-05-14 03:03:54.196760] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.755 ms 00:17:08.370 [2024-05-14 03:03:54.196771] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.370 [2024-05-14 03:03:54.196873] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.370 [2024-05-14 03:03:54.196892] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:08.370 [2024-05-14 03:03:54.196905] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:17:08.370 
[2024-05-14 03:03:54.196917] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.370 [2024-05-14 03:03:54.198021] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:08.370 [2024-05-14 03:03:54.199257] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 117.651 ms, result 0 00:17:08.370 [2024-05-14 03:03:54.200037] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:08.370 [2024-05-14 03:03:54.207991] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:20.108  Copying: 24/256 [MB] (24 MBps) Copying: 46/256 [MB] (22 MBps) Copying: 68/256 [MB] (21 MBps) Copying: 89/256 [MB] (21 MBps) Copying: 111/256 [MB] (21 MBps) Copying: 132/256 [MB] (21 MBps) Copying: 154/256 [MB] (21 MBps) Copying: 175/256 [MB] (21 MBps) Copying: 196/256 [MB] (21 MBps) Copying: 217/256 [MB] (21 MBps) Copying: 238/256 [MB] (20 MBps) Copying: 256/256 [MB] (average 21 MBps)[2024-05-14 03:04:06.009032] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:20.108 [2024-05-14 03:04:06.010231] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.108 [2024-05-14 03:04:06.010299] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:20.108 [2024-05-14 03:04:06.010318] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:20.108 [2024-05-14 03:04:06.010329] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.108 [2024-05-14 03:04:06.010358] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:20.108 [2024-05-14 03:04:06.010797] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.108 [2024-05-14 03:04:06.010833] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:20.108 [2024-05-14 03:04:06.010852] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.419 ms 00:17:20.108 [2024-05-14 03:04:06.010862] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.108 [2024-05-14 03:04:06.011122] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.108 [2024-05-14 03:04:06.011176] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:20.108 [2024-05-14 03:04:06.011190] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.235 ms 00:17:20.108 [2024-05-14 03:04:06.011201] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.108 [2024-05-14 03:04:06.014865] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.108 [2024-05-14 03:04:06.014923] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:20.108 [2024-05-14 03:04:06.014963] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.643 ms 00:17:20.108 [2024-05-14 03:04:06.014977] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.108 [2024-05-14 03:04:06.021903] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.108 [2024-05-14 03:04:06.021949] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:20.108 [2024-05-14 03:04:06.021978] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.885 ms 
00:17:20.108 [2024-05-14 03:04:06.021988] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.108 [2024-05-14 03:04:06.023403] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.108 [2024-05-14 03:04:06.023470] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:20.108 [2024-05-14 03:04:06.023515] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.355 ms 00:17:20.108 [2024-05-14 03:04:06.023525] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.108 [2024-05-14 03:04:06.026901] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.108 [2024-05-14 03:04:06.026954] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:20.108 [2024-05-14 03:04:06.026979] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.337 ms 00:17:20.108 [2024-05-14 03:04:06.026996] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.108 [2024-05-14 03:04:06.027144] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.108 [2024-05-14 03:04:06.027188] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:20.108 [2024-05-14 03:04:06.027201] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:17:20.108 [2024-05-14 03:04:06.027212] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.108 [2024-05-14 03:04:06.029025] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.108 [2024-05-14 03:04:06.029087] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:20.108 [2024-05-14 03:04:06.029111] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.789 ms 00:17:20.108 [2024-05-14 03:04:06.029121] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.108 [2024-05-14 03:04:06.030666] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.108 [2024-05-14 03:04:06.030720] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:20.108 [2024-05-14 03:04:06.030734] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.489 ms 00:17:20.108 [2024-05-14 03:04:06.030744] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.108 [2024-05-14 03:04:06.032022] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.108 [2024-05-14 03:04:06.032075] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:20.108 [2024-05-14 03:04:06.032089] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.226 ms 00:17:20.108 [2024-05-14 03:04:06.032115] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.108 [2024-05-14 03:04:06.033536] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.108 [2024-05-14 03:04:06.033587] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:20.108 [2024-05-14 03:04:06.033600] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.296 ms 00:17:20.108 [2024-05-14 03:04:06.033610] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.108 [2024-05-14 03:04:06.033646] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:20.108 [2024-05-14 03:04:06.033667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:20.108 
[2024-05-14 03:04:06.033680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:20.108 [2024-05-14 03:04:06.033691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:20.108 [2024-05-14 03:04:06.033701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:20.108 [2024-05-14 03:04:06.033727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:20.108 [2024-05-14 03:04:06.033737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:20.108 [2024-05-14 03:04:06.033747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:20.108 [2024-05-14 03:04:06.033758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:20.108 [2024-05-14 03:04:06.033769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:20.108 [2024-05-14 03:04:06.033780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:20.108 [2024-05-14 03:04:06.033790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:20.108 [2024-05-14 03:04:06.033801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:20.108 [2024-05-14 03:04:06.033820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:20.108 [2024-05-14 03:04:06.033830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:20.108 [2024-05-14 03:04:06.033841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:20.108 [2024-05-14 03:04:06.033851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:20.108 [2024-05-14 03:04:06.033862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:20.108 [2024-05-14 03:04:06.033872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:20.108 [2024-05-14 03:04:06.033883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.033893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.033903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.033914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.033924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.033935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.033945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.033956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 
00:17:20.109 [2024-05-14 03:04:06.033966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.033976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.033987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.033997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 
wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:20.109 [2024-05-14 03:04:06.034816] ftl_debug.c: 211:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] 00:17:20.109 [2024-05-14 03:04:06.034826] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f1c59e5a-0c75-4386-944d-643450163ef9 00:17:20.109 [2024-05-14 03:04:06.034837] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:20.109 [2024-05-14 03:04:06.034854] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:20.109 [2024-05-14 03:04:06.034864] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:20.109 [2024-05-14 03:04:06.034874] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:20.109 [2024-05-14 03:04:06.034884] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:20.109 [2024-05-14 03:04:06.034894] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:20.109 [2024-05-14 03:04:06.034915] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:20.109 [2024-05-14 03:04:06.034925] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:20.109 [2024-05-14 03:04:06.034934] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:20.110 [2024-05-14 03:04:06.034944] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.110 [2024-05-14 03:04:06.034955] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:20.110 [2024-05-14 03:04:06.034970] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.300 ms 00:17:20.110 [2024-05-14 03:04:06.034980] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.110 [2024-05-14 03:04:06.036512] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.110 [2024-05-14 03:04:06.036543] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:20.110 [2024-05-14 03:04:06.036557] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.490 ms 00:17:20.110 [2024-05-14 03:04:06.036584] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.110 [2024-05-14 03:04:06.036675] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.110 [2024-05-14 03:04:06.036690] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:20.110 [2024-05-14 03:04:06.036703] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:17:20.110 [2024-05-14 03:04:06.036714] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.110 [2024-05-14 03:04:06.042322] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.110 [2024-05-14 03:04:06.042373] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:20.110 [2024-05-14 03:04:06.042403] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.110 [2024-05-14 03:04:06.042415] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.110 [2024-05-14 03:04:06.042487] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.110 [2024-05-14 03:04:06.042504] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:20.110 [2024-05-14 03:04:06.042517] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.110 [2024-05-14 03:04:06.042539] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.110 [2024-05-14 03:04:06.042604] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:17:20.110 [2024-05-14 03:04:06.042622] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:20.110 [2024-05-14 03:04:06.042635] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.110 [2024-05-14 03:04:06.042646] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.110 [2024-05-14 03:04:06.042672] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.110 [2024-05-14 03:04:06.042685] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:20.110 [2024-05-14 03:04:06.042697] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.110 [2024-05-14 03:04:06.042708] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.110 [2024-05-14 03:04:06.050997] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.110 [2024-05-14 03:04:06.051064] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:20.110 [2024-05-14 03:04:06.051095] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.110 [2024-05-14 03:04:06.051105] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.110 [2024-05-14 03:04:06.055199] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.110 [2024-05-14 03:04:06.055277] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:20.110 [2024-05-14 03:04:06.055310] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.110 [2024-05-14 03:04:06.055321] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.110 [2024-05-14 03:04:06.055352] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.110 [2024-05-14 03:04:06.055377] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:20.110 [2024-05-14 03:04:06.055389] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.110 [2024-05-14 03:04:06.055399] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.110 [2024-05-14 03:04:06.055433] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.110 [2024-05-14 03:04:06.055473] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:20.110 [2024-05-14 03:04:06.055485] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.110 [2024-05-14 03:04:06.055496] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.110 [2024-05-14 03:04:06.055585] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.110 [2024-05-14 03:04:06.055610] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:20.110 [2024-05-14 03:04:06.055622] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.110 [2024-05-14 03:04:06.055642] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.110 [2024-05-14 03:04:06.055693] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.110 [2024-05-14 03:04:06.055725] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:20.110 [2024-05-14 03:04:06.055739] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.110 [2024-05-14 03:04:06.055751] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.110 [2024-05-14 
03:04:06.055817] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.110 [2024-05-14 03:04:06.055838] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:20.110 [2024-05-14 03:04:06.055855] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.110 [2024-05-14 03:04:06.055866] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.110 [2024-05-14 03:04:06.055931] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.110 [2024-05-14 03:04:06.055947] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:20.110 [2024-05-14 03:04:06.055959] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.110 [2024-05-14 03:04:06.055981] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.110 [2024-05-14 03:04:06.056192] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 45.878 ms, result 0 00:17:20.393 00:17:20.393 00:17:20.393 03:04:06 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:17:20.393 03:04:06 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:20.961 03:04:06 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:20.961 [2024-05-14 03:04:06.908576] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:17:20.961 [2024-05-14 03:04:06.908788] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90538 ] 00:17:21.219 [2024-05-14 03:04:07.056877] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:17:21.219 [2024-05-14 03:04:07.077917] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.219 [2024-05-14 03:04:07.112928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:21.219 [2024-05-14 03:04:07.196974] bdev.c:8090:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:21.219 [2024-05-14 03:04:07.197078] bdev.c:8090:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:21.481 [2024-05-14 03:04:07.348643] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.481 [2024-05-14 03:04:07.348705] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:21.481 [2024-05-14 03:04:07.348739] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:21.481 [2024-05-14 03:04:07.348750] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.481 [2024-05-14 03:04:07.351198] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.481 [2024-05-14 03:04:07.351266] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:21.481 [2024-05-14 03:04:07.351282] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.422 ms 00:17:21.481 [2024-05-14 03:04:07.351292] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.481 [2024-05-14 03:04:07.351384] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:21.481 [2024-05-14 03:04:07.351724] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:21.481 [2024-05-14 03:04:07.351756] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.481 [2024-05-14 03:04:07.351772] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:21.481 [2024-05-14 03:04:07.351785] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.381 ms 00:17:21.481 [2024-05-14 03:04:07.351796] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.481 [2024-05-14 03:04:07.353332] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:21.481 [2024-05-14 03:04:07.355512] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.481 [2024-05-14 03:04:07.355565] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:21.481 [2024-05-14 03:04:07.355597] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.182 ms 00:17:21.481 [2024-05-14 03:04:07.355608] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.481 [2024-05-14 03:04:07.355679] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.481 [2024-05-14 03:04:07.355699] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:21.481 [2024-05-14 03:04:07.355715] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:17:21.481 [2024-05-14 03:04:07.355727] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.481 [2024-05-14 03:04:07.360005] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.481 [2024-05-14 03:04:07.360050] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:21.481 [2024-05-14 03:04:07.360069] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.205 ms 00:17:21.481 [2024-05-14 03:04:07.360089] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.481 [2024-05-14 03:04:07.360283] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.481 [2024-05-14 03:04:07.360320] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:21.481 [2024-05-14 03:04:07.360333] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:17:21.481 [2024-05-14 03:04:07.360347] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.481 [2024-05-14 03:04:07.360404] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.481 [2024-05-14 03:04:07.360420] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:21.481 [2024-05-14 03:04:07.360432] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:21.481 [2024-05-14 03:04:07.360443] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.481 [2024-05-14 03:04:07.360478] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:21.481 [2024-05-14 03:04:07.361764] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.481 [2024-05-14 03:04:07.361814] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:21.481 [2024-05-14 03:04:07.361829] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.300 ms 00:17:21.481 [2024-05-14 03:04:07.361859] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.481 [2024-05-14 03:04:07.361903] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.481 [2024-05-14 03:04:07.361919] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:21.481 [2024-05-14 03:04:07.361931] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:21.481 [2024-05-14 03:04:07.361941] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.481 [2024-05-14 03:04:07.361964] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:21.481 [2024-05-14 03:04:07.362008] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:17:21.481 [2024-05-14 03:04:07.362061] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:21.481 [2024-05-14 03:04:07.362094] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:17:21.482 [2024-05-14 03:04:07.362215] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:21.482 [2024-05-14 03:04:07.362238] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:21.482 [2024-05-14 03:04:07.362254] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:21.482 [2024-05-14 03:04:07.362268] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:21.482 [2024-05-14 03:04:07.362281] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:21.482 [2024-05-14 03:04:07.362293] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:21.482 [2024-05-14 03:04:07.362303] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] 
L2P address size: 4 00:17:21.482 [2024-05-14 03:04:07.362314] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:21.482 [2024-05-14 03:04:07.362329] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:21.482 [2024-05-14 03:04:07.362341] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.482 [2024-05-14 03:04:07.362352] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:21.482 [2024-05-14 03:04:07.362363] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.379 ms 00:17:21.482 [2024-05-14 03:04:07.362374] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.482 [2024-05-14 03:04:07.362450] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.482 [2024-05-14 03:04:07.362475] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:21.482 [2024-05-14 03:04:07.362488] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:17:21.482 [2024-05-14 03:04:07.362499] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.482 [2024-05-14 03:04:07.362597] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:21.482 [2024-05-14 03:04:07.362614] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:21.482 [2024-05-14 03:04:07.362626] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:21.482 [2024-05-14 03:04:07.362637] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:21.482 [2024-05-14 03:04:07.362648] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:21.482 [2024-05-14 03:04:07.362658] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:21.482 [2024-05-14 03:04:07.362668] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:21.482 [2024-05-14 03:04:07.362678] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:21.482 [2024-05-14 03:04:07.362688] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:21.482 [2024-05-14 03:04:07.362698] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:21.482 [2024-05-14 03:04:07.362708] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:21.482 [2024-05-14 03:04:07.362717] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:21.482 [2024-05-14 03:04:07.362733] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:21.482 [2024-05-14 03:04:07.362755] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:21.482 [2024-05-14 03:04:07.362766] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:17:21.482 [2024-05-14 03:04:07.362776] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:21.482 [2024-05-14 03:04:07.362786] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:21.482 [2024-05-14 03:04:07.362796] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:17:21.482 [2024-05-14 03:04:07.362806] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:21.482 [2024-05-14 03:04:07.362815] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:21.482 [2024-05-14 03:04:07.362825] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:17:21.482 [2024-05-14 
03:04:07.362835] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:21.482 [2024-05-14 03:04:07.362845] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:21.482 [2024-05-14 03:04:07.362855] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:21.482 [2024-05-14 03:04:07.362865] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:21.482 [2024-05-14 03:04:07.362874] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:21.482 [2024-05-14 03:04:07.362884] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:17:21.482 [2024-05-14 03:04:07.362894] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:21.482 [2024-05-14 03:04:07.362910] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:21.482 [2024-05-14 03:04:07.362920] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:21.482 [2024-05-14 03:04:07.362930] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:21.482 [2024-05-14 03:04:07.362939] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:21.482 [2024-05-14 03:04:07.362949] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:17:21.482 [2024-05-14 03:04:07.362958] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:21.482 [2024-05-14 03:04:07.362968] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:21.482 [2024-05-14 03:04:07.362978] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:21.482 [2024-05-14 03:04:07.362987] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:21.482 [2024-05-14 03:04:07.362997] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:21.482 [2024-05-14 03:04:07.363008] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:17:21.482 [2024-05-14 03:04:07.363017] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:21.482 [2024-05-14 03:04:07.363026] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:21.482 [2024-05-14 03:04:07.363037] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:21.482 [2024-05-14 03:04:07.363047] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:21.482 [2024-05-14 03:04:07.363058] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:21.482 [2024-05-14 03:04:07.363072] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:21.482 [2024-05-14 03:04:07.363083] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:21.482 [2024-05-14 03:04:07.363093] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:21.482 [2024-05-14 03:04:07.363103] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:21.482 [2024-05-14 03:04:07.363113] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:21.482 [2024-05-14 03:04:07.363122] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:21.482 [2024-05-14 03:04:07.363133] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:21.482 [2024-05-14 03:04:07.363160] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 
00:17:21.482 [2024-05-14 03:04:07.363190] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:21.482 [2024-05-14 03:04:07.363202] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:17:21.482 [2024-05-14 03:04:07.363213] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:17:21.482 [2024-05-14 03:04:07.363224] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:17:21.482 [2024-05-14 03:04:07.363235] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:17:21.482 [2024-05-14 03:04:07.363246] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:17:21.482 [2024-05-14 03:04:07.363257] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:17:21.482 [2024-05-14 03:04:07.363267] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:17:21.482 [2024-05-14 03:04:07.363281] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:17:21.482 [2024-05-14 03:04:07.363293] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:17:21.482 [2024-05-14 03:04:07.363303] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:17:21.482 [2024-05-14 03:04:07.363315] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:17:21.482 [2024-05-14 03:04:07.363326] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:17:21.482 [2024-05-14 03:04:07.363336] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:21.482 [2024-05-14 03:04:07.363352] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:21.482 [2024-05-14 03:04:07.363363] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:21.482 [2024-05-14 03:04:07.363374] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:21.482 [2024-05-14 03:04:07.363385] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:21.482 [2024-05-14 03:04:07.363396] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:21.482 [2024-05-14 03:04:07.363408] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.482 [2024-05-14 03:04:07.363420] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:21.482 
[2024-05-14 03:04:07.363431] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.850 ms 00:17:21.482 [2024-05-14 03:04:07.363452] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.482 [2024-05-14 03:04:07.369930] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.482 [2024-05-14 03:04:07.369993] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:21.482 [2024-05-14 03:04:07.370009] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.390 ms 00:17:21.482 [2024-05-14 03:04:07.370030] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.482 [2024-05-14 03:04:07.370198] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.482 [2024-05-14 03:04:07.370218] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:21.482 [2024-05-14 03:04:07.370242] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:17:21.482 [2024-05-14 03:04:07.370257] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.482 [2024-05-14 03:04:07.390534] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.483 [2024-05-14 03:04:07.390612] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:21.483 [2024-05-14 03:04:07.390648] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.235 ms 00:17:21.483 [2024-05-14 03:04:07.390664] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.483 [2024-05-14 03:04:07.390827] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.483 [2024-05-14 03:04:07.390855] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:21.483 [2024-05-14 03:04:07.390873] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:17:21.483 [2024-05-14 03:04:07.390889] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.483 [2024-05-14 03:04:07.391313] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.483 [2024-05-14 03:04:07.391373] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:21.483 [2024-05-14 03:04:07.391402] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.382 ms 00:17:21.483 [2024-05-14 03:04:07.391417] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.483 [2024-05-14 03:04:07.391644] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.483 [2024-05-14 03:04:07.391673] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:21.483 [2024-05-14 03:04:07.391690] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.176 ms 00:17:21.483 [2024-05-14 03:04:07.391715] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.483 [2024-05-14 03:04:07.397908] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.483 [2024-05-14 03:04:07.397972] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:21.483 [2024-05-14 03:04:07.397988] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.155 ms 00:17:21.483 [2024-05-14 03:04:07.397998] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.483 [2024-05-14 03:04:07.400357] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:21.483 [2024-05-14 
03:04:07.400414] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:21.483 [2024-05-14 03:04:07.400446] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.483 [2024-05-14 03:04:07.400457] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:21.483 [2024-05-14 03:04:07.400468] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.339 ms 00:17:21.483 [2024-05-14 03:04:07.400479] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.483 [2024-05-14 03:04:07.413860] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.483 [2024-05-14 03:04:07.413928] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:21.483 [2024-05-14 03:04:07.413962] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.316 ms 00:17:21.483 [2024-05-14 03:04:07.413972] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.483 [2024-05-14 03:04:07.415805] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.483 [2024-05-14 03:04:07.415902] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:21.483 [2024-05-14 03:04:07.415918] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.739 ms 00:17:21.483 [2024-05-14 03:04:07.415928] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.483 [2024-05-14 03:04:07.417593] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.483 [2024-05-14 03:04:07.417643] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:21.483 [2024-05-14 03:04:07.417672] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.614 ms 00:17:21.483 [2024-05-14 03:04:07.417689] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.483 [2024-05-14 03:04:07.417948] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.483 [2024-05-14 03:04:07.417977] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:21.483 [2024-05-14 03:04:07.418000] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.166 ms 00:17:21.483 [2024-05-14 03:04:07.418011] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.483 [2024-05-14 03:04:07.436284] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.483 [2024-05-14 03:04:07.436358] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:21.483 [2024-05-14 03:04:07.436377] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.228 ms 00:17:21.483 [2024-05-14 03:04:07.436388] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.483 [2024-05-14 03:04:07.444513] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:21.483 [2024-05-14 03:04:07.457796] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.483 [2024-05-14 03:04:07.457868] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:21.483 [2024-05-14 03:04:07.457903] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.295 ms 00:17:21.483 [2024-05-14 03:04:07.457914] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.483 [2024-05-14 03:04:07.458030] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:21.483 [2024-05-14 03:04:07.458058] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:21.483 [2024-05-14 03:04:07.458076] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:21.483 [2024-05-14 03:04:07.458087] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.483 [2024-05-14 03:04:07.458177] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.483 [2024-05-14 03:04:07.458211] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:21.483 [2024-05-14 03:04:07.458223] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:17:21.483 [2024-05-14 03:04:07.458234] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.483 [2024-05-14 03:04:07.460002] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.483 [2024-05-14 03:04:07.460064] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:21.483 [2024-05-14 03:04:07.460094] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.722 ms 00:17:21.483 [2024-05-14 03:04:07.460125] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.483 [2024-05-14 03:04:07.460229] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.483 [2024-05-14 03:04:07.460261] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:21.483 [2024-05-14 03:04:07.460272] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:21.483 [2024-05-14 03:04:07.460283] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.483 [2024-05-14 03:04:07.460325] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:21.483 [2024-05-14 03:04:07.460359] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.483 [2024-05-14 03:04:07.460385] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:21.483 [2024-05-14 03:04:07.460396] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:17:21.483 [2024-05-14 03:04:07.460407] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.483 [2024-05-14 03:04:07.464334] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.483 [2024-05-14 03:04:07.464398] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:21.483 [2024-05-14 03:04:07.464417] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.892 ms 00:17:21.483 [2024-05-14 03:04:07.464430] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.483 [2024-05-14 03:04:07.464524] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.483 [2024-05-14 03:04:07.464544] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:21.483 [2024-05-14 03:04:07.464557] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:17:21.483 [2024-05-14 03:04:07.464569] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.483 [2024-05-14 03:04:07.465645] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:21.483 [2024-05-14 03:04:07.466897] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 116.688 ms, result 0 00:17:21.483 [2024-05-14 03:04:07.467776] 
mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:21.483 [2024-05-14 03:04:07.475771] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:21.743  Copying: 4096/4096 [kB] (average 21 MBps)[2024-05-14 03:04:07.664117] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:21.743 [2024-05-14 03:04:07.665198] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.743 [2024-05-14 03:04:07.665283] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:21.743 [2024-05-14 03:04:07.665318] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:21.743 [2024-05-14 03:04:07.665330] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.743 [2024-05-14 03:04:07.665358] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:21.743 [2024-05-14 03:04:07.665830] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.743 [2024-05-14 03:04:07.665861] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:21.743 [2024-05-14 03:04:07.665876] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.451 ms 00:17:21.743 [2024-05-14 03:04:07.665888] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.743 [2024-05-14 03:04:07.667585] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.743 [2024-05-14 03:04:07.667639] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:21.743 [2024-05-14 03:04:07.667670] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.659 ms 00:17:21.743 [2024-05-14 03:04:07.667681] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.743 [2024-05-14 03:04:07.671284] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.743 [2024-05-14 03:04:07.671338] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:21.743 [2024-05-14 03:04:07.671369] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.581 ms 00:17:21.743 [2024-05-14 03:04:07.671389] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.743 [2024-05-14 03:04:07.678616] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.743 [2024-05-14 03:04:07.678663] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:21.743 [2024-05-14 03:04:07.678677] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.178 ms 00:17:21.743 [2024-05-14 03:04:07.678688] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.743 [2024-05-14 03:04:07.680261] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.743 [2024-05-14 03:04:07.680312] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:21.743 [2024-05-14 03:04:07.680342] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.501 ms 00:17:21.743 [2024-05-14 03:04:07.680352] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.743 [2024-05-14 03:04:07.683450] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.743 [2024-05-14 03:04:07.683514] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:21.743 
[2024-05-14 03:04:07.683536] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.045 ms 00:17:21.743 [2024-05-14 03:04:07.683547] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.743 [2024-05-14 03:04:07.683663] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.743 [2024-05-14 03:04:07.683693] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:21.743 [2024-05-14 03:04:07.683705] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:17:21.743 [2024-05-14 03:04:07.683731] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.743 [2024-05-14 03:04:07.685497] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.743 [2024-05-14 03:04:07.685561] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:21.743 [2024-05-14 03:04:07.685574] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.712 ms 00:17:21.743 [2024-05-14 03:04:07.685599] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.743 [2024-05-14 03:04:07.687245] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.743 [2024-05-14 03:04:07.687324] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:21.743 [2024-05-14 03:04:07.687353] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.609 ms 00:17:21.743 [2024-05-14 03:04:07.687364] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.743 [2024-05-14 03:04:07.688641] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.743 [2024-05-14 03:04:07.688678] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:21.743 [2024-05-14 03:04:07.688693] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.239 ms 00:17:21.743 [2024-05-14 03:04:07.688703] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.743 [2024-05-14 03:04:07.690018] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.743 [2024-05-14 03:04:07.690070] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:21.743 [2024-05-14 03:04:07.690099] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.212 ms 00:17:21.743 [2024-05-14 03:04:07.690126] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.743 [2024-05-14 03:04:07.690189] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:21.743 [2024-05-14 03:04:07.690224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:21.743 [2024-05-14 03:04:07.690238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:21.743 [2024-05-14 03:04:07.690249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:21.743 [2024-05-14 03:04:07.690261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:21.743 [2024-05-14 03:04:07.690272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:21.743 [2024-05-14 03:04:07.690283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:21.743 [2024-05-14 03:04:07.690294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 
261120 wr_cnt: 0 state: free 00:17:21.743 [2024-05-14 03:04:07.690306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:21.743 [2024-05-14 03:04:07.690317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:21.743 [2024-05-14 03:04:07.690361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:21.743 [2024-05-14 03:04:07.690388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:21.743 [2024-05-14 03:04:07.690400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:21.743 [2024-05-14 03:04:07.690413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:21.743 [2024-05-14 03:04:07.690439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:21.743 [2024-05-14 03:04:07.690452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:21.743 [2024-05-14 03:04:07.690464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:21.743 [2024-05-14 03:04:07.690476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:21.743 [2024-05-14 03:04:07.690487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:21.743 [2024-05-14 03:04:07.690499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:21.743 [2024-05-14 03:04:07.690511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:21.743 [2024-05-14 03:04:07.690523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:21.743 [2024-05-14 03:04:07.690535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:21.743 [2024-05-14 03:04:07.690546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:21.743 [2024-05-14 03:04:07.690558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:21.743 [2024-05-14 03:04:07.690570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:21.743 [2024-05-14 03:04:07.690581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:21.743 [2024-05-14 03:04:07.690593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:21.743 [2024-05-14 03:04:07.690605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:21.743 [2024-05-14 03:04:07.690617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:21.743 [2024-05-14 03:04:07.690629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:21.743 [2024-05-14 03:04:07.690642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:21.743 [2024-05-14 03:04:07.690654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:21.743 [2024-05-14 03:04:07.690666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:21.743 [2024-05-14 03:04:07.690678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:21.743 [2024-05-14 03:04:07.690693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:21.743 [2024-05-14 03:04:07.690704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:21.743 [2024-05-14 03:04:07.690716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:21.743 [2024-05-14 03:04:07.690739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:21.744 [2024-05-14 03:04:07.690750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:21.744 [2024-05-14 03:04:07.690762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:21.744 [2024-05-14 03:04:07.690773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:21.744 [2024-05-14 03:04:07.690785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:21.744 [2024-05-14 03:04:07.690796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:21.744 [2024-05-14 03:04:07.690807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:21.744 [2024-05-14 03:04:07.690819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:21.744 [2024-05-14 03:04:07.690830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:21.744 [2024-05-14 03:04:07.690841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:21.744 [2024-05-14 03:04:07.690853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:21.744 [2024-05-14 03:04:07.690865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:21.744 [2024-05-14 03:04:07.690876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:21.744 [2024-05-14 03:04:07.690888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:21.744 [2024-05-14 03:04:07.690899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:21.744 [2024-05-14 03:04:07.690910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:21.744 [2024-05-14 03:04:07.690922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:21.744 [2024-05-14 03:04:07.690933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:21.744 [2024-05-14 03:04:07.690944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:21.744 [2024-05-14 03:04:07.690956] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:21.744 [2024-05-14 03:04:07.690967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:21.744 [2024-05-14 03:04:07.690979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:21.744 [2024-05-14 03:04:07.690990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:21.744 [2024-05-14 03:04:07.691002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:21.744 [2024-05-14 03:04:07.691013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:21.744 [2024-05-14 03:04:07.691024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:21.744 [2024-05-14 03:04:07.691036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:21.744 [2024-05-14 03:04:07.691047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:21.744 [2024-05-14 03:04:07.691059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:21.744 [2024-05-14 03:04:07.691071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:21.744 [2024-05-14 03:04:07.691083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:21.744 [2024-05-14 03:04:07.691095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:21.744 [2024-05-14 03:04:07.691106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:21.744 [2024-05-14 03:04:07.691118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:21.744 [2024-05-14 03:04:07.691130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:21.744 [2024-05-14 03:04:07.691141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:21.744 [2024-05-14 03:04:07.691152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:21.744 [2024-05-14 03:04:07.691175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:21.744 [2024-05-14 03:04:07.691186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:21.744 [2024-05-14 03:04:07.691198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:21.744 [2024-05-14 03:04:07.691223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:21.744 [2024-05-14 03:04:07.691253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:21.744 [2024-05-14 03:04:07.691266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:21.744 [2024-05-14 03:04:07.691278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:21.744 [2024-05-14 
03:04:07.691291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:21.744 [2024-05-14 03:04:07.691303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:21.744 [2024-05-14 03:04:07.691315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:21.744 [2024-05-14 03:04:07.691327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:21.744 [2024-05-14 03:04:07.691340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:21.744 [2024-05-14 03:04:07.691352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:21.744 [2024-05-14 03:04:07.691364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:21.744 [2024-05-14 03:04:07.691376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:21.744 [2024-05-14 03:04:07.691388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:21.744 [2024-05-14 03:04:07.691400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:21.744 [2024-05-14 03:04:07.691413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:21.744 [2024-05-14 03:04:07.691425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:21.744 [2024-05-14 03:04:07.691437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:21.744 [2024-05-14 03:04:07.691449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:21.744 [2024-05-14 03:04:07.691461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:21.744 [2024-05-14 03:04:07.691474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:21.744 [2024-05-14 03:04:07.691486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:21.744 [2024-05-14 03:04:07.691499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:21.744 [2024-05-14 03:04:07.691511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:21.744 [2024-05-14 03:04:07.691532] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:21.744 [2024-05-14 03:04:07.691544] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f1c59e5a-0c75-4386-944d-643450163ef9 00:17:21.744 [2024-05-14 03:04:07.691563] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:21.744 [2024-05-14 03:04:07.691574] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:21.744 [2024-05-14 03:04:07.691586] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:21.744 [2024-05-14 03:04:07.691597] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:21.744 [2024-05-14 03:04:07.691609] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:21.744 [2024-05-14 
03:04:07.691620] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:21.744 [2024-05-14 03:04:07.691646] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:21.744 [2024-05-14 03:04:07.691657] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:21.744 [2024-05-14 03:04:07.691670] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:21.744 [2024-05-14 03:04:07.691681] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.744 [2024-05-14 03:04:07.691693] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:21.744 [2024-05-14 03:04:07.691711] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.494 ms 00:17:21.744 [2024-05-14 03:04:07.691723] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.744 [2024-05-14 03:04:07.693225] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.744 [2024-05-14 03:04:07.693283] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:21.744 [2024-05-14 03:04:07.693299] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.448 ms 00:17:21.744 [2024-05-14 03:04:07.693310] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.744 [2024-05-14 03:04:07.693378] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.744 [2024-05-14 03:04:07.693396] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:21.744 [2024-05-14 03:04:07.693423] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:17:21.744 [2024-05-14 03:04:07.693434] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.744 [2024-05-14 03:04:07.698536] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.744 [2024-05-14 03:04:07.698587] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:21.744 [2024-05-14 03:04:07.698617] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.744 [2024-05-14 03:04:07.698627] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.744 [2024-05-14 03:04:07.698683] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.744 [2024-05-14 03:04:07.698698] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:21.744 [2024-05-14 03:04:07.698709] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.744 [2024-05-14 03:04:07.698720] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.744 [2024-05-14 03:04:07.698775] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.744 [2024-05-14 03:04:07.698791] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:21.744 [2024-05-14 03:04:07.698802] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.744 [2024-05-14 03:04:07.698828] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.744 [2024-05-14 03:04:07.698883] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.744 [2024-05-14 03:04:07.698897] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:21.744 [2024-05-14 03:04:07.698920] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.744 [2024-05-14 03:04:07.698931] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:17:21.744 [2024-05-14 03:04:07.707148] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.744 [2024-05-14 03:04:07.707225] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:21.744 [2024-05-14 03:04:07.707257] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.744 [2024-05-14 03:04:07.707268] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.744 [2024-05-14 03:04:07.710908] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.744 [2024-05-14 03:04:07.710959] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:21.744 [2024-05-14 03:04:07.710989] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.744 [2024-05-14 03:04:07.711000] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.744 [2024-05-14 03:04:07.711035] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.744 [2024-05-14 03:04:07.711049] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:21.744 [2024-05-14 03:04:07.711061] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.744 [2024-05-14 03:04:07.711071] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.744 [2024-05-14 03:04:07.711101] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.744 [2024-05-14 03:04:07.711114] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:21.744 [2024-05-14 03:04:07.711124] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.744 [2024-05-14 03:04:07.711134] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.744 [2024-05-14 03:04:07.711282] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.744 [2024-05-14 03:04:07.711306] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:21.744 [2024-05-14 03:04:07.711319] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.744 [2024-05-14 03:04:07.711331] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.744 [2024-05-14 03:04:07.711389] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.744 [2024-05-14 03:04:07.711407] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:21.744 [2024-05-14 03:04:07.711419] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.744 [2024-05-14 03:04:07.711430] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.744 [2024-05-14 03:04:07.711474] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.744 [2024-05-14 03:04:07.711501] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:21.744 [2024-05-14 03:04:07.711514] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.744 [2024-05-14 03:04:07.711525] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.744 [2024-05-14 03:04:07.711586] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.744 [2024-05-14 03:04:07.711606] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:21.744 [2024-05-14 03:04:07.711626] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.744 
[2024-05-14 03:04:07.711637] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.744 [2024-05-14 03:04:07.711836] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 46.567 ms, result 0 00:17:22.002 00:17:22.002 00:17:22.002 03:04:07 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=90552 00:17:22.002 03:04:07 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:22.002 03:04:07 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 90552 00:17:22.002 03:04:07 ftl.ftl_trim -- common/autotest_common.sh@827 -- # '[' -z 90552 ']' 00:17:22.002 03:04:07 ftl.ftl_trim -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.002 03:04:07 ftl.ftl_trim -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:22.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:22.002 03:04:07 ftl.ftl_trim -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:22.002 03:04:07 ftl.ftl_trim -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:22.002 03:04:07 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:22.260 [2024-05-14 03:04:08.057882] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:17:22.260 [2024-05-14 03:04:08.058079] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90552 ] 00:17:22.260 [2024-05-14 03:04:08.206421] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:17:22.260 [2024-05-14 03:04:08.222631] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.260 [2024-05-14 03:04:08.256475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:23.193 03:04:08 ftl.ftl_trim -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:23.193 03:04:08 ftl.ftl_trim -- common/autotest_common.sh@860 -- # return 0 00:17:23.193 03:04:08 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:23.193 [2024-05-14 03:04:09.178708] bdev.c:8090:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:23.193 [2024-05-14 03:04:09.178812] bdev.c:8090:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:23.453 [2024-05-14 03:04:09.343697] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.453 [2024-05-14 03:04:09.343798] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:23.453 [2024-05-14 03:04:09.343868] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:23.453 [2024-05-14 03:04:09.343885] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.453 [2024-05-14 03:04:09.346734] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.453 [2024-05-14 03:04:09.346816] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:23.453 [2024-05-14 03:04:09.346842] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.810 ms 00:17:23.453 [2024-05-14 03:04:09.346857] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.453 [2024-05-14 03:04:09.347153] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:23.453 [2024-05-14 03:04:09.347473] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:23.453 [2024-05-14 03:04:09.347515] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.453 [2024-05-14 03:04:09.347533] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:23.453 [2024-05-14 03:04:09.347547] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.391 ms 00:17:23.453 [2024-05-14 03:04:09.347560] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.453 [2024-05-14 03:04:09.348969] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:23.453 [2024-05-14 03:04:09.351314] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.453 [2024-05-14 03:04:09.351371] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:23.453 [2024-05-14 03:04:09.351405] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.343 ms 00:17:23.453 [2024-05-14 03:04:09.351417] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.453 [2024-05-14 03:04:09.351507] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.453 [2024-05-14 03:04:09.351527] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:23.453 [2024-05-14 03:04:09.351542] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:17:23.453 [2024-05-14 03:04:09.351552] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.453 [2024-05-14 03:04:09.356126] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.453 [2024-05-14 
03:04:09.356214] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:23.453 [2024-05-14 03:04:09.356259] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.470 ms 00:17:23.453 [2024-05-14 03:04:09.356269] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.453 [2024-05-14 03:04:09.356398] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.453 [2024-05-14 03:04:09.356417] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:23.453 [2024-05-14 03:04:09.356431] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:17:23.453 [2024-05-14 03:04:09.356457] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.453 [2024-05-14 03:04:09.356528] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.453 [2024-05-14 03:04:09.356543] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:23.453 [2024-05-14 03:04:09.356557] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:23.453 [2024-05-14 03:04:09.356568] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.453 [2024-05-14 03:04:09.356619] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:23.453 [2024-05-14 03:04:09.358059] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.453 [2024-05-14 03:04:09.358132] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:23.453 [2024-05-14 03:04:09.358179] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.453 ms 00:17:23.453 [2024-05-14 03:04:09.358211] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.453 [2024-05-14 03:04:09.358269] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.453 [2024-05-14 03:04:09.358298] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:23.453 [2024-05-14 03:04:09.358311] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:23.453 [2024-05-14 03:04:09.358324] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.454 [2024-05-14 03:04:09.358367] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:23.454 [2024-05-14 03:04:09.358396] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:17:23.454 [2024-05-14 03:04:09.358463] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:23.454 [2024-05-14 03:04:09.358499] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:17:23.454 [2024-05-14 03:04:09.358610] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:23.454 [2024-05-14 03:04:09.358628] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:23.454 [2024-05-14 03:04:09.358642] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:23.454 [2024-05-14 03:04:09.358658] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:23.454 [2024-05-14 03:04:09.358672] ftl_layout.c: 675:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:23.454 [2024-05-14 03:04:09.358687] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:23.454 [2024-05-14 03:04:09.358698] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:23.454 [2024-05-14 03:04:09.358710] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:23.454 [2024-05-14 03:04:09.358724] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:23.454 [2024-05-14 03:04:09.358737] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.454 [2024-05-14 03:04:09.358750] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:23.454 [2024-05-14 03:04:09.358763] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.386 ms 00:17:23.454 [2024-05-14 03:04:09.358782] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.454 [2024-05-14 03:04:09.358859] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.454 [2024-05-14 03:04:09.358873] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:23.454 [2024-05-14 03:04:09.358888] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:17:23.454 [2024-05-14 03:04:09.358899] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.454 [2024-05-14 03:04:09.358995] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:23.454 [2024-05-14 03:04:09.359013] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:23.454 [2024-05-14 03:04:09.359030] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:23.454 [2024-05-14 03:04:09.359041] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:23.454 [2024-05-14 03:04:09.359065] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:23.454 [2024-05-14 03:04:09.359076] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:23.454 [2024-05-14 03:04:09.359088] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:23.454 [2024-05-14 03:04:09.359098] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:23.454 [2024-05-14 03:04:09.359110] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:23.454 [2024-05-14 03:04:09.359120] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:23.454 [2024-05-14 03:04:09.359132] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:23.454 [2024-05-14 03:04:09.359142] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:23.454 [2024-05-14 03:04:09.359153] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:23.454 [2024-05-14 03:04:09.359164] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:23.454 [2024-05-14 03:04:09.359176] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:17:23.454 [2024-05-14 03:04:09.359185] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:23.454 [2024-05-14 03:04:09.359214] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:23.454 [2024-05-14 03:04:09.359226] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:17:23.454 [2024-05-14 03:04:09.359239] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:17:23.454 [2024-05-14 03:04:09.359250] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:23.454 [2024-05-14 03:04:09.359265] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:17:23.454 [2024-05-14 03:04:09.359276] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:23.454 [2024-05-14 03:04:09.359300] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:23.454 [2024-05-14 03:04:09.359310] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:23.454 [2024-05-14 03:04:09.359322] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:23.454 [2024-05-14 03:04:09.359332] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:23.454 [2024-05-14 03:04:09.359344] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:17:23.454 [2024-05-14 03:04:09.359354] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:23.454 [2024-05-14 03:04:09.359365] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:23.454 [2024-05-14 03:04:09.359375] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:23.454 [2024-05-14 03:04:09.359386] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:23.454 [2024-05-14 03:04:09.359397] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:23.454 [2024-05-14 03:04:09.359408] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:17:23.454 [2024-05-14 03:04:09.359418] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:23.454 [2024-05-14 03:04:09.359430] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:23.454 [2024-05-14 03:04:09.359440] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:23.454 [2024-05-14 03:04:09.359454] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:23.454 [2024-05-14 03:04:09.359464] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:23.454 [2024-05-14 03:04:09.359475] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:17:23.454 [2024-05-14 03:04:09.359485] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:23.454 [2024-05-14 03:04:09.359497] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:23.454 [2024-05-14 03:04:09.359508] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:23.454 [2024-05-14 03:04:09.359521] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:23.454 [2024-05-14 03:04:09.359532] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:23.454 [2024-05-14 03:04:09.359544] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:23.454 [2024-05-14 03:04:09.359555] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:23.454 [2024-05-14 03:04:09.359568] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:23.454 [2024-05-14 03:04:09.359578] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:23.454 [2024-05-14 03:04:09.359590] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:23.454 [2024-05-14 03:04:09.359600] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:23.454 [2024-05-14 03:04:09.359613] 
upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:23.454 [2024-05-14 03:04:09.359626] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:23.454 [2024-05-14 03:04:09.359643] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:23.454 [2024-05-14 03:04:09.359654] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:17:23.454 [2024-05-14 03:04:09.359667] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:17:23.454 [2024-05-14 03:04:09.359679] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:17:23.454 [2024-05-14 03:04:09.359691] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:17:23.454 [2024-05-14 03:04:09.359703] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:17:23.454 [2024-05-14 03:04:09.359715] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:17:23.454 [2024-05-14 03:04:09.359726] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:17:23.454 [2024-05-14 03:04:09.359738] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:17:23.454 [2024-05-14 03:04:09.359749] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:17:23.454 [2024-05-14 03:04:09.359761] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:17:23.454 [2024-05-14 03:04:09.359773] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:17:23.454 [2024-05-14 03:04:09.359786] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:17:23.454 [2024-05-14 03:04:09.359796] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:23.454 [2024-05-14 03:04:09.359810] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:23.454 [2024-05-14 03:04:09.359850] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:23.454 [2024-05-14 03:04:09.359868] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:23.454 [2024-05-14 03:04:09.359880] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:23.454 [2024-05-14 03:04:09.359893] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:23.454 [2024-05-14 03:04:09.359907] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.454 [2024-05-14 03:04:09.359923] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:23.454 [2024-05-14 03:04:09.359935] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.957 ms 00:17:23.454 [2024-05-14 03:04:09.359947] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.454 [2024-05-14 03:04:09.366117] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.454 [2024-05-14 03:04:09.366220] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:23.454 [2024-05-14 03:04:09.366238] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.108 ms 00:17:23.454 [2024-05-14 03:04:09.366251] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.454 [2024-05-14 03:04:09.366397] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.454 [2024-05-14 03:04:09.366435] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:23.455 [2024-05-14 03:04:09.366466] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:17:23.455 [2024-05-14 03:04:09.366478] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.455 [2024-05-14 03:04:09.375553] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.455 [2024-05-14 03:04:09.375644] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:23.455 [2024-05-14 03:04:09.375660] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.046 ms 00:17:23.455 [2024-05-14 03:04:09.375673] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.455 [2024-05-14 03:04:09.375763] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.455 [2024-05-14 03:04:09.375786] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:23.455 [2024-05-14 03:04:09.375798] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:23.455 [2024-05-14 03:04:09.375809] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.455 [2024-05-14 03:04:09.376263] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.455 [2024-05-14 03:04:09.376295] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:23.455 [2024-05-14 03:04:09.376310] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.369 ms 00:17:23.455 [2024-05-14 03:04:09.376322] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.455 [2024-05-14 03:04:09.376480] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.455 [2024-05-14 03:04:09.376504] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:23.455 [2024-05-14 03:04:09.376519] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:17:23.455 [2024-05-14 03:04:09.376531] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.455 [2024-05-14 03:04:09.382159] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.455 [2024-05-14 03:04:09.382226] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:23.455 [2024-05-14 03:04:09.382259] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.600 ms 00:17:23.455 
[2024-05-14 03:04:09.382272] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.455 [2024-05-14 03:04:09.384658] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:23.455 [2024-05-14 03:04:09.384698] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:23.455 [2024-05-14 03:04:09.384730] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.455 [2024-05-14 03:04:09.384752] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:23.455 [2024-05-14 03:04:09.384764] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.336 ms 00:17:23.455 [2024-05-14 03:04:09.384775] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.455 [2024-05-14 03:04:09.397818] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.455 [2024-05-14 03:04:09.397892] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:23.455 [2024-05-14 03:04:09.397909] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.996 ms 00:17:23.455 [2024-05-14 03:04:09.397921] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.455 [2024-05-14 03:04:09.400117] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.455 [2024-05-14 03:04:09.400175] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:23.455 [2024-05-14 03:04:09.400193] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.115 ms 00:17:23.455 [2024-05-14 03:04:09.400209] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.455 [2024-05-14 03:04:09.401830] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.455 [2024-05-14 03:04:09.401877] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:23.455 [2024-05-14 03:04:09.401893] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.569 ms 00:17:23.455 [2024-05-14 03:04:09.401907] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.455 [2024-05-14 03:04:09.402187] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.455 [2024-05-14 03:04:09.402224] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:23.455 [2024-05-14 03:04:09.402239] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.176 ms 00:17:23.455 [2024-05-14 03:04:09.402253] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.455 [2024-05-14 03:04:09.420850] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.455 [2024-05-14 03:04:09.420949] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:23.455 [2024-05-14 03:04:09.420968] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.567 ms 00:17:23.455 [2024-05-14 03:04:09.420981] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.455 [2024-05-14 03:04:09.428389] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:23.455 [2024-05-14 03:04:09.440450] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.455 [2024-05-14 03:04:09.440531] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:23.455 [2024-05-14 03:04:09.440581] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.358 ms 00:17:23.455 [2024-05-14 03:04:09.440592] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.455 [2024-05-14 03:04:09.440699] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.455 [2024-05-14 03:04:09.440747] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:23.455 [2024-05-14 03:04:09.440777] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:23.455 [2024-05-14 03:04:09.440791] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.455 [2024-05-14 03:04:09.440851] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.455 [2024-05-14 03:04:09.440874] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:23.455 [2024-05-14 03:04:09.440888] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:17:23.455 [2024-05-14 03:04:09.440899] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.455 [2024-05-14 03:04:09.442669] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.455 [2024-05-14 03:04:09.442718] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:23.455 [2024-05-14 03:04:09.442749] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.743 ms 00:17:23.455 [2024-05-14 03:04:09.442760] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.455 [2024-05-14 03:04:09.442802] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.455 [2024-05-14 03:04:09.442818] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:23.455 [2024-05-14 03:04:09.442834] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:23.455 [2024-05-14 03:04:09.442844] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.455 [2024-05-14 03:04:09.442915] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:23.455 [2024-05-14 03:04:09.442930] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.455 [2024-05-14 03:04:09.442943] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:23.455 [2024-05-14 03:04:09.442954] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:17:23.455 [2024-05-14 03:04:09.442966] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.455 [2024-05-14 03:04:09.446526] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.455 [2024-05-14 03:04:09.446587] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:23.455 [2024-05-14 03:04:09.446619] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.533 ms 00:17:23.455 [2024-05-14 03:04:09.446631] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.455 [2024-05-14 03:04:09.446707] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.455 [2024-05-14 03:04:09.446760] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:23.455 [2024-05-14 03:04:09.446788] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:17:23.455 [2024-05-14 03:04:09.446800] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.455 [2024-05-14 03:04:09.448042] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:23.455 [2024-05-14 03:04:09.449278] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 103.987 ms, result 0 00:17:23.455 [2024-05-14 03:04:09.450503] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:23.455 Some configs were skipped because the RPC state that can call them passed over. 00:17:23.714 03:04:09 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:23.714 [2024-05-14 03:04:09.698477] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.714 [2024-05-14 03:04:09.698561] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:17:23.714 [2024-05-14 03:04:09.698607] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.410 ms 00:17:23.714 [2024-05-14 03:04:09.698619] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.714 [2024-05-14 03:04:09.698666] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 4.607 ms, result 0 00:17:23.714 true 00:17:23.714 03:04:09 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:23.973 [2024-05-14 03:04:09.910352] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.973 [2024-05-14 03:04:09.910436] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:17:23.973 [2024-05-14 03:04:09.910454] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.572 ms 00:17:23.973 [2024-05-14 03:04:09.910466] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.973 [2024-05-14 03:04:09.910508] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 3.732 ms, result 0 00:17:23.973 true 00:17:23.973 03:04:09 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 90552 00:17:23.973 03:04:09 ftl.ftl_trim -- common/autotest_common.sh@946 -- # '[' -z 90552 ']' 00:17:23.973 03:04:09 ftl.ftl_trim -- common/autotest_common.sh@950 -- # kill -0 90552 00:17:23.973 03:04:09 ftl.ftl_trim -- common/autotest_common.sh@951 -- # uname 00:17:23.973 03:04:09 ftl.ftl_trim -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:23.973 03:04:09 ftl.ftl_trim -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 90552 00:17:23.973 03:04:09 ftl.ftl_trim -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:23.973 killing process with pid 90552 00:17:23.973 03:04:09 ftl.ftl_trim -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:23.973 03:04:09 ftl.ftl_trim -- common/autotest_common.sh@964 -- # echo 'killing process with pid 90552' 00:17:23.973 03:04:09 ftl.ftl_trim -- common/autotest_common.sh@965 -- # kill 90552 00:17:23.973 03:04:09 ftl.ftl_trim -- common/autotest_common.sh@970 -- # wait 90552 00:17:24.234 [2024-05-14 03:04:10.048621] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.234 [2024-05-14 03:04:10.048690] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:24.234 [2024-05-14 03:04:10.048737] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:24.234 [2024-05-14 03:04:10.048748] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.234 [2024-05-14 03:04:10.048784] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:24.234 [2024-05-14 03:04:10.049290] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.234 [2024-05-14 03:04:10.049329] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:24.234 [2024-05-14 03:04:10.049343] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.487 ms 00:17:24.234 [2024-05-14 03:04:10.049357] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.234 [2024-05-14 03:04:10.049670] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.234 [2024-05-14 03:04:10.049702] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:24.234 [2024-05-14 03:04:10.049716] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:17:24.234 [2024-05-14 03:04:10.049738] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.234 [2024-05-14 03:04:10.053517] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.234 [2024-05-14 03:04:10.053590] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:24.234 [2024-05-14 03:04:10.053617] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.754 ms 00:17:24.234 [2024-05-14 03:04:10.053632] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.234 [2024-05-14 03:04:10.060926] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.234 [2024-05-14 03:04:10.060984] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:24.234 [2024-05-14 03:04:10.061001] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.178 ms 00:17:24.234 [2024-05-14 03:04:10.061017] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.234 [2024-05-14 03:04:10.062350] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.234 [2024-05-14 03:04:10.062393] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:24.234 [2024-05-14 03:04:10.062410] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.255 ms 00:17:24.234 [2024-05-14 03:04:10.062423] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.234 [2024-05-14 03:04:10.065597] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.234 [2024-05-14 03:04:10.065645] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:24.235 [2024-05-14 03:04:10.065675] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.119 ms 00:17:24.235 [2024-05-14 03:04:10.065688] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.235 [2024-05-14 03:04:10.065878] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.235 [2024-05-14 03:04:10.065906] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:24.235 [2024-05-14 03:04:10.065919] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:17:24.235 [2024-05-14 03:04:10.065932] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.235 [2024-05-14 03:04:10.067619] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.235 [2024-05-14 03:04:10.067674] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: persist band info metadata 00:17:24.235 [2024-05-14 03:04:10.067689] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.648 ms 00:17:24.235 [2024-05-14 03:04:10.067705] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.235 [2024-05-14 03:04:10.069031] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.235 [2024-05-14 03:04:10.069118] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:24.235 [2024-05-14 03:04:10.069132] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.284 ms 00:17:24.235 [2024-05-14 03:04:10.069144] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.235 [2024-05-14 03:04:10.070460] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.235 [2024-05-14 03:04:10.070522] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:24.235 [2024-05-14 03:04:10.070551] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.250 ms 00:17:24.235 [2024-05-14 03:04:10.070563] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.235 [2024-05-14 03:04:10.071766] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.235 [2024-05-14 03:04:10.071858] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:24.235 [2024-05-14 03:04:10.071876] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.138 ms 00:17:24.235 [2024-05-14 03:04:10.071889] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.235 [2024-05-14 03:04:10.071932] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:24.235 [2024-05-14 03:04:10.071959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:24.235 [2024-05-14 03:04:10.071974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:24.235 [2024-05-14 03:04:10.071990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:24.235 [2024-05-14 03:04:10.072003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:24.235 [2024-05-14 03:04:10.072017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:24.235 [2024-05-14 03:04:10.072029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:24.235 [2024-05-14 03:04:10.072043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:24.235 [2024-05-14 03:04:10.072055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:24.235 [2024-05-14 03:04:10.072069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:24.235 [2024-05-14 03:04:10.072081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:24.235 [2024-05-14 03:04:10.072097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:24.235 [2024-05-14 03:04:10.072109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:24.235 [2024-05-14 03:04:10.072123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 
/ 261120 wr_cnt: 0 state: free 00:17:24.235 [2024-05-14 03:04:10.072150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:24.235 [2024-05-14 03:04:10.072200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:24.235 [2024-05-14 03:04:10.072212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:24.235 [2024-05-14 03:04:10.072225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:24.235 [2024-05-14 03:04:10.072236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:24.235 [2024-05-14 03:04:10.072251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:24.235 [2024-05-14 03:04:10.072262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:24.235 [2024-05-14 03:04:10.072275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:24.235 [2024-05-14 03:04:10.072286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:24.235 [2024-05-14 03:04:10.072299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:24.235 [2024-05-14 03:04:10.072311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:24.235 [2024-05-14 03:04:10.072324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:24.235 [2024-05-14 03:04:10.072335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:24.235 [2024-05-14 03:04:10.072347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:24.235 [2024-05-14 03:04:10.072358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:24.235 [2024-05-14 03:04:10.072371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:24.235 [2024-05-14 03:04:10.072382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:24.235 [2024-05-14 03:04:10.072396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:24.235 [2024-05-14 03:04:10.072407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:24.235 [2024-05-14 03:04:10.072420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:24.235 [2024-05-14 03:04:10.072431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:24.235 [2024-05-14 03:04:10.072445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:24.235 [2024-05-14 03:04:10.072457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:24.235 [2024-05-14 03:04:10.072471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:24.235 [2024-05-14 03:04:10.072483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:24.235 [2024-05-14 03:04:10.072496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:24.235 [2024-05-14 03:04:10.072508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:24.235 [2024-05-14 03:04:10.072521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:24.235 [2024-05-14 03:04:10.072532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:24.235 [2024-05-14 03:04:10.072545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:24.235 [2024-05-14 03:04:10.072556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:24.235 [2024-05-14 03:04:10.072569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:24.235 [2024-05-14 03:04:10.072580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:24.235 [2024-05-14 03:04:10.072593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:24.235 [2024-05-14 03:04:10.072604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:24.235 [2024-05-14 03:04:10.072617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:24.235 [2024-05-14 03:04:10.072628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:24.236 [2024-05-14 03:04:10.072644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:24.236 [2024-05-14 03:04:10.072655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:24.236 [2024-05-14 03:04:10.072668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:24.236 [2024-05-14 03:04:10.072679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:24.236 [2024-05-14 03:04:10.072692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:24.236 [2024-05-14 03:04:10.072703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:24.236 [2024-05-14 03:04:10.072716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:24.236 [2024-05-14 03:04:10.072727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:24.236 [2024-05-14 03:04:10.072739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:24.236 [2024-05-14 03:04:10.072751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:24.236 [2024-05-14 03:04:10.072763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:24.236 [2024-05-14 03:04:10.072775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:24.236 [2024-05-14 03:04:10.072788] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:24.236 [2024-05-14 03:04:10.072799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:24.236 [2024-05-14 03:04:10.072813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:24.236 [2024-05-14 03:04:10.072824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:24.236 [2024-05-14 03:04:10.072839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:24.236 [2024-05-14 03:04:10.072851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:24.236 [2024-05-14 03:04:10.072875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:24.236 [2024-05-14 03:04:10.072888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:24.236 [2024-05-14 03:04:10.072901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:24.236 [2024-05-14 03:04:10.072914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:24.236 [2024-05-14 03:04:10.072927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:24.236 [2024-05-14 03:04:10.072938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:24.236 [2024-05-14 03:04:10.072950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:24.236 [2024-05-14 03:04:10.072961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:24.236 [2024-05-14 03:04:10.072974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:24.236 [2024-05-14 03:04:10.072985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:24.236 [2024-05-14 03:04:10.072997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:24.236 [2024-05-14 03:04:10.073009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:24.236 [2024-05-14 03:04:10.073021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:24.236 [2024-05-14 03:04:10.073033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:24.236 [2024-05-14 03:04:10.073047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:24.236 [2024-05-14 03:04:10.073059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:24.236 [2024-05-14 03:04:10.073084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:24.236 [2024-05-14 03:04:10.073095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:24.236 [2024-05-14 03:04:10.073109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:24.236 [2024-05-14 
03:04:10.073120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:24.236 [2024-05-14 03:04:10.073146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:24.236 [2024-05-14 03:04:10.073160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:24.236 [2024-05-14 03:04:10.073173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:24.236 [2024-05-14 03:04:10.073184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:24.236 [2024-05-14 03:04:10.073197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:24.236 [2024-05-14 03:04:10.073208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:24.236 [2024-05-14 03:04:10.073222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:24.236 [2024-05-14 03:04:10.073233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:24.236 [2024-05-14 03:04:10.073246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:24.236 [2024-05-14 03:04:10.073257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:24.236 [2024-05-14 03:04:10.073272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:24.236 [2024-05-14 03:04:10.073283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:24.236 [2024-05-14 03:04:10.073305] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:24.236 [2024-05-14 03:04:10.073319] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f1c59e5a-0c75-4386-944d-643450163ef9 00:17:24.236 [2024-05-14 03:04:10.073332] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:24.236 [2024-05-14 03:04:10.073343] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:24.236 [2024-05-14 03:04:10.073355] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:24.236 [2024-05-14 03:04:10.073366] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:24.236 [2024-05-14 03:04:10.073378] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:24.236 [2024-05-14 03:04:10.073389] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:24.236 [2024-05-14 03:04:10.073401] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:24.236 [2024-05-14 03:04:10.073410] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:24.236 [2024-05-14 03:04:10.073421] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:24.236 [2024-05-14 03:04:10.073432] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.236 [2024-05-14 03:04:10.073445] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:24.236 [2024-05-14 03:04:10.073456] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.502 ms 00:17:24.236 [2024-05-14 03:04:10.073472] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:24.236 [2024-05-14 03:04:10.074876] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.236 [2024-05-14 03:04:10.074912] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:24.236 [2024-05-14 03:04:10.074927] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.362 ms 00:17:24.236 [2024-05-14 03:04:10.074939] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.236 [2024-05-14 03:04:10.075006] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.236 [2024-05-14 03:04:10.075041] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:24.236 [2024-05-14 03:04:10.075055] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:17:24.236 [2024-05-14 03:04:10.075068] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.236 [2024-05-14 03:04:10.080942] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.236 [2024-05-14 03:04:10.081004] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:24.236 [2024-05-14 03:04:10.081020] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.236 [2024-05-14 03:04:10.081033] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.236 [2024-05-14 03:04:10.081112] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.236 [2024-05-14 03:04:10.081161] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:24.236 [2024-05-14 03:04:10.081230] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.236 [2024-05-14 03:04:10.081261] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.236 [2024-05-14 03:04:10.081339] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.236 [2024-05-14 03:04:10.081386] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:24.236 [2024-05-14 03:04:10.081400] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.236 [2024-05-14 03:04:10.081413] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.236 [2024-05-14 03:04:10.081455] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.236 [2024-05-14 03:04:10.081477] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:24.237 [2024-05-14 03:04:10.081489] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.237 [2024-05-14 03:04:10.081504] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.237 [2024-05-14 03:04:10.090380] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.237 [2024-05-14 03:04:10.090457] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:24.237 [2024-05-14 03:04:10.090473] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.237 [2024-05-14 03:04:10.090485] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.237 [2024-05-14 03:04:10.094035] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.237 [2024-05-14 03:04:10.094091] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:24.237 [2024-05-14 03:04:10.094106] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.237 [2024-05-14 
03:04:10.094124] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.237 [2024-05-14 03:04:10.094187] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.237 [2024-05-14 03:04:10.094207] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:24.237 [2024-05-14 03:04:10.094258] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.237 [2024-05-14 03:04:10.094279] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.237 [2024-05-14 03:04:10.094315] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.237 [2024-05-14 03:04:10.094331] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:24.237 [2024-05-14 03:04:10.094343] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.237 [2024-05-14 03:04:10.094354] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.237 [2024-05-14 03:04:10.094443] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.237 [2024-05-14 03:04:10.094474] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:24.237 [2024-05-14 03:04:10.094493] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.237 [2024-05-14 03:04:10.094505] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.237 [2024-05-14 03:04:10.094573] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.237 [2024-05-14 03:04:10.094595] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:24.237 [2024-05-14 03:04:10.094607] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.237 [2024-05-14 03:04:10.094621] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.237 [2024-05-14 03:04:10.094681] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.237 [2024-05-14 03:04:10.094699] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:24.237 [2024-05-14 03:04:10.094710] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.237 [2024-05-14 03:04:10.094722] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.237 [2024-05-14 03:04:10.094774] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.237 [2024-05-14 03:04:10.094793] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:24.237 [2024-05-14 03:04:10.094805] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.237 [2024-05-14 03:04:10.094817] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.237 [2024-05-14 03:04:10.094980] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 46.336 ms, result 0 00:17:24.496 03:04:10 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:24.496 [2024-05-14 03:04:10.374250] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 
00:17:24.496 [2024-05-14 03:04:10.374394] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90588 ] 00:17:24.496 [2024-05-14 03:04:10.511358] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:24.754 [2024-05-14 03:04:10.531985] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.754 [2024-05-14 03:04:10.569194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:24.754 [2024-05-14 03:04:10.650275] bdev.c:8090:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:24.754 [2024-05-14 03:04:10.650345] bdev.c:8090:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:25.013 [2024-05-14 03:04:10.801021] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.013 [2024-05-14 03:04:10.801086] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:25.013 [2024-05-14 03:04:10.801104] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:25.013 [2024-05-14 03:04:10.801125] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.013 [2024-05-14 03:04:10.803917] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.013 [2024-05-14 03:04:10.803959] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:25.013 [2024-05-14 03:04:10.803975] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.735 ms 00:17:25.013 [2024-05-14 03:04:10.803987] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.013 [2024-05-14 03:04:10.804098] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:25.013 [2024-05-14 03:04:10.804491] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:25.013 [2024-05-14 03:04:10.804525] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.013 [2024-05-14 03:04:10.804541] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:25.013 [2024-05-14 03:04:10.804563] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.447 ms 00:17:25.013 [2024-05-14 03:04:10.804574] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.013 [2024-05-14 03:04:10.805967] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:25.013 [2024-05-14 03:04:10.808334] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.013 [2024-05-14 03:04:10.808383] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:25.013 [2024-05-14 03:04:10.808421] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.369 ms 00:17:25.013 [2024-05-14 03:04:10.808441] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.013 [2024-05-14 03:04:10.808513] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.013 [2024-05-14 03:04:10.808547] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:25.013 [2024-05-14 03:04:10.808561] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:17:25.013 [2024-05-14 03:04:10.808571] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.013 [2024-05-14 03:04:10.813149] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.013 [2024-05-14 03:04:10.813208] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:25.013 [2024-05-14 03:04:10.813241] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.527 ms 00:17:25.013 [2024-05-14 03:04:10.813251] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.013 [2024-05-14 03:04:10.813385] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.013 [2024-05-14 03:04:10.813404] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:25.013 [2024-05-14 03:04:10.813415] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:17:25.013 [2024-05-14 03:04:10.813444] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.013 [2024-05-14 03:04:10.813523] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.013 [2024-05-14 03:04:10.813539] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:25.013 [2024-05-14 03:04:10.813560] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:25.013 [2024-05-14 03:04:10.813571] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.013 [2024-05-14 03:04:10.813599] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:25.013 [2024-05-14 03:04:10.814959] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.013 [2024-05-14 03:04:10.815022] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:25.013 [2024-05-14 03:04:10.815067] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.367 ms 00:17:25.013 [2024-05-14 03:04:10.815083] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.013 [2024-05-14 03:04:10.815127] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.013 [2024-05-14 03:04:10.815142] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:25.013 [2024-05-14 03:04:10.815169] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:25.013 [2024-05-14 03:04:10.815180] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.013 [2024-05-14 03:04:10.815226] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:25.013 [2024-05-14 03:04:10.815257] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:17:25.013 [2024-05-14 03:04:10.815304] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:25.013 [2024-05-14 03:04:10.815341] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:17:25.013 [2024-05-14 03:04:10.815457] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:25.013 [2024-05-14 03:04:10.815473] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:25.013 [2024-05-14 03:04:10.815487] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:25.013 [2024-05-14 
03:04:10.815515] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:25.013 [2024-05-14 03:04:10.815528] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:25.013 [2024-05-14 03:04:10.815542] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:25.013 [2024-05-14 03:04:10.815552] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:25.013 [2024-05-14 03:04:10.815562] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:25.013 [2024-05-14 03:04:10.815577] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:25.013 [2024-05-14 03:04:10.815588] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.013 [2024-05-14 03:04:10.815598] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:25.013 [2024-05-14 03:04:10.815609] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.365 ms 00:17:25.013 [2024-05-14 03:04:10.815620] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.013 [2024-05-14 03:04:10.815706] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.013 [2024-05-14 03:04:10.815721] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:25.013 [2024-05-14 03:04:10.815743] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:17:25.013 [2024-05-14 03:04:10.815753] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.013 [2024-05-14 03:04:10.815909] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:25.013 [2024-05-14 03:04:10.815928] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:25.013 [2024-05-14 03:04:10.815951] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:25.013 [2024-05-14 03:04:10.815963] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:25.013 [2024-05-14 03:04:10.815975] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:25.013 [2024-05-14 03:04:10.815986] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:25.013 [2024-05-14 03:04:10.815996] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:25.013 [2024-05-14 03:04:10.816007] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:25.013 [2024-05-14 03:04:10.816017] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:25.013 [2024-05-14 03:04:10.816027] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:25.013 [2024-05-14 03:04:10.816050] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:25.013 [2024-05-14 03:04:10.816060] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:25.013 [2024-05-14 03:04:10.816073] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:25.013 [2024-05-14 03:04:10.816095] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:25.013 [2024-05-14 03:04:10.816106] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:17:25.013 [2024-05-14 03:04:10.816115] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:25.013 [2024-05-14 03:04:10.816126] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:25.013 
[2024-05-14 03:04:10.816167] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:17:25.013 [2024-05-14 03:04:10.816176] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:25.013 [2024-05-14 03:04:10.816200] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:25.013 [2024-05-14 03:04:10.816212] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:17:25.013 [2024-05-14 03:04:10.816267] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:25.013 [2024-05-14 03:04:10.816281] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:25.013 [2024-05-14 03:04:10.816290] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:25.013 [2024-05-14 03:04:10.816299] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:25.013 [2024-05-14 03:04:10.816308] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:25.013 [2024-05-14 03:04:10.816316] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:17:25.013 [2024-05-14 03:04:10.816325] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:25.013 [2024-05-14 03:04:10.816340] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:25.013 [2024-05-14 03:04:10.816365] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:25.013 [2024-05-14 03:04:10.816389] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:25.013 [2024-05-14 03:04:10.816399] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:25.013 [2024-05-14 03:04:10.816408] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:17:25.013 [2024-05-14 03:04:10.816418] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:25.013 [2024-05-14 03:04:10.816427] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:25.013 [2024-05-14 03:04:10.816452] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:25.013 [2024-05-14 03:04:10.816462] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:25.013 [2024-05-14 03:04:10.816472] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:25.013 [2024-05-14 03:04:10.816482] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:17:25.013 [2024-05-14 03:04:10.816491] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:25.013 [2024-05-14 03:04:10.816500] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:25.013 [2024-05-14 03:04:10.816511] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:25.013 [2024-05-14 03:04:10.816521] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:25.013 [2024-05-14 03:04:10.816532] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:25.013 [2024-05-14 03:04:10.816559] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:25.013 [2024-05-14 03:04:10.816570] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:25.013 [2024-05-14 03:04:10.816580] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:25.013 [2024-05-14 03:04:10.816589] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:25.013 [2024-05-14 03:04:10.816598] ftl_layout.c: 116:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 0.25 MiB 00:17:25.013 [2024-05-14 03:04:10.816609] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:25.013 [2024-05-14 03:04:10.816619] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:25.013 [2024-05-14 03:04:10.816648] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:25.013 [2024-05-14 03:04:10.816661] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:25.013 [2024-05-14 03:04:10.816673] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:17:25.013 [2024-05-14 03:04:10.816700] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:17:25.013 [2024-05-14 03:04:10.816711] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:17:25.013 [2024-05-14 03:04:10.816722] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:17:25.013 [2024-05-14 03:04:10.816733] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:17:25.013 [2024-05-14 03:04:10.816744] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:17:25.013 [2024-05-14 03:04:10.816755] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:17:25.013 [2024-05-14 03:04:10.816769] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:17:25.013 [2024-05-14 03:04:10.816781] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:17:25.013 [2024-05-14 03:04:10.816792] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:17:25.013 [2024-05-14 03:04:10.816804] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:17:25.013 [2024-05-14 03:04:10.816816] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:17:25.013 [2024-05-14 03:04:10.816827] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:25.013 [2024-05-14 03:04:10.816843] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:25.013 [2024-05-14 03:04:10.816857] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:25.013 [2024-05-14 03:04:10.816868] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:25.013 [2024-05-14 03:04:10.816880] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 
ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:25.013 [2024-05-14 03:04:10.816892] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:25.013 [2024-05-14 03:04:10.816905] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.013 [2024-05-14 03:04:10.816917] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:25.013 [2024-05-14 03:04:10.816929] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.097 ms 00:17:25.013 [2024-05-14 03:04:10.816939] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.013 [2024-05-14 03:04:10.823050] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.013 [2024-05-14 03:04:10.823104] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:25.013 [2024-05-14 03:04:10.823119] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.041 ms 00:17:25.013 [2024-05-14 03:04:10.823130] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.013 [2024-05-14 03:04:10.823287] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.013 [2024-05-14 03:04:10.823337] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:25.013 [2024-05-14 03:04:10.823366] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:17:25.013 [2024-05-14 03:04:10.823382] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.013 [2024-05-14 03:04:10.841943] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.013 [2024-05-14 03:04:10.842017] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:25.013 [2024-05-14 03:04:10.842055] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.510 ms 00:17:25.013 [2024-05-14 03:04:10.842072] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.013 [2024-05-14 03:04:10.842260] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.013 [2024-05-14 03:04:10.842310] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:25.013 [2024-05-14 03:04:10.842329] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:17:25.013 [2024-05-14 03:04:10.842345] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.013 [2024-05-14 03:04:10.842831] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.013 [2024-05-14 03:04:10.842877] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:25.013 [2024-05-14 03:04:10.842896] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.435 ms 00:17:25.013 [2024-05-14 03:04:10.842912] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.013 [2024-05-14 03:04:10.843114] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.013 [2024-05-14 03:04:10.843166] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:25.013 [2024-05-14 03:04:10.843186] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms 00:17:25.013 [2024-05-14 03:04:10.843201] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.013 [2024-05-14 03:04:10.849894] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.013 [2024-05-14 03:04:10.849942] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:25.013 [2024-05-14 03:04:10.849956] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.654 ms 00:17:25.013 [2024-05-14 03:04:10.849966] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.013 [2024-05-14 03:04:10.852526] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:25.013 [2024-05-14 03:04:10.852577] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:25.013 [2024-05-14 03:04:10.852592] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.014 [2024-05-14 03:04:10.852602] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:25.014 [2024-05-14 03:04:10.852613] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.520 ms 00:17:25.014 [2024-05-14 03:04:10.852622] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.014 [2024-05-14 03:04:10.866842] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.014 [2024-05-14 03:04:10.866891] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:25.014 [2024-05-14 03:04:10.866906] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.121 ms 00:17:25.014 [2024-05-14 03:04:10.866928] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.014 [2024-05-14 03:04:10.868893] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.014 [2024-05-14 03:04:10.868943] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:25.014 [2024-05-14 03:04:10.868957] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.885 ms 00:17:25.014 [2024-05-14 03:04:10.868966] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.014 [2024-05-14 03:04:10.870658] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.014 [2024-05-14 03:04:10.870705] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:25.014 [2024-05-14 03:04:10.870718] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.647 ms 00:17:25.014 [2024-05-14 03:04:10.870727] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.014 [2024-05-14 03:04:10.870974] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.014 [2024-05-14 03:04:10.871000] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:25.014 [2024-05-14 03:04:10.871014] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.174 ms 00:17:25.014 [2024-05-14 03:04:10.871023] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.014 [2024-05-14 03:04:10.888101] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.014 [2024-05-14 03:04:10.888173] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:25.014 [2024-05-14 03:04:10.888192] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.033 ms 00:17:25.014 [2024-05-14 03:04:10.888202] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.014 [2024-05-14 03:04:10.896233] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:25.014 [2024-05-14 03:04:10.911007] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:25.014 [2024-05-14 03:04:10.911075] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:25.014 [2024-05-14 03:04:10.911092] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.667 ms 00:17:25.014 [2024-05-14 03:04:10.911102] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.014 [2024-05-14 03:04:10.911248] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.014 [2024-05-14 03:04:10.911268] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:25.014 [2024-05-14 03:04:10.911284] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:25.014 [2024-05-14 03:04:10.911294] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.014 [2024-05-14 03:04:10.911353] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.014 [2024-05-14 03:04:10.911368] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:25.014 [2024-05-14 03:04:10.911394] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:17:25.014 [2024-05-14 03:04:10.911420] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.014 [2024-05-14 03:04:10.913767] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.014 [2024-05-14 03:04:10.913876] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:25.014 [2024-05-14 03:04:10.913891] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.303 ms 00:17:25.014 [2024-05-14 03:04:10.913907] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.014 [2024-05-14 03:04:10.913946] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.014 [2024-05-14 03:04:10.913962] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:25.014 [2024-05-14 03:04:10.913975] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:25.014 [2024-05-14 03:04:10.913985] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.014 [2024-05-14 03:04:10.914042] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:25.014 [2024-05-14 03:04:10.914061] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.014 [2024-05-14 03:04:10.914074] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:25.014 [2024-05-14 03:04:10.914088] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:17:25.014 [2024-05-14 03:04:10.914100] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.014 [2024-05-14 03:04:10.917635] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.014 [2024-05-14 03:04:10.917700] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:25.014 [2024-05-14 03:04:10.917717] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.503 ms 00:17:25.014 [2024-05-14 03:04:10.917729] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.014 [2024-05-14 03:04:10.917821] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.014 [2024-05-14 03:04:10.917840] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:25.014 [2024-05-14 03:04:10.917853] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:17:25.014 
[2024-05-14 03:04:10.917864] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.014 [2024-05-14 03:04:10.918826] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:25.014 [2024-05-14 03:04:10.920177] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 117.485 ms, result 0 00:17:25.014 [2024-05-14 03:04:10.920954] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:25.014 [2024-05-14 03:04:10.928781] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:36.471  Copying: 25/256 [MB] (25 MBps) Copying: 47/256 [MB] (21 MBps) Copying: 68/256 [MB] (21 MBps) Copying: 90/256 [MB] (21 MBps) Copying: 112/256 [MB] (21 MBps) Copying: 135/256 [MB] (23 MBps) Copying: 157/256 [MB] (22 MBps) Copying: 180/256 [MB] (22 MBps) Copying: 203/256 [MB] (22 MBps) Copying: 225/256 [MB] (22 MBps) Copying: 248/256 [MB] (22 MBps) Copying: 256/256 [MB] (average 22 MBps)[2024-05-14 03:04:22.469609] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:36.471 [2024-05-14 03:04:22.470908] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.471 [2024-05-14 03:04:22.470951] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:36.471 [2024-05-14 03:04:22.470974] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:17:36.471 [2024-05-14 03:04:22.470989] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.471 [2024-05-14 03:04:22.471039] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:36.471 [2024-05-14 03:04:22.471602] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.471 [2024-05-14 03:04:22.471631] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:36.471 [2024-05-14 03:04:22.471657] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.527 ms 00:17:36.471 [2024-05-14 03:04:22.471671] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.471 [2024-05-14 03:04:22.472102] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.471 [2024-05-14 03:04:22.472126] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:36.471 [2024-05-14 03:04:22.472162] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.371 ms 00:17:36.471 [2024-05-14 03:04:22.472181] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.471 [2024-05-14 03:04:22.477085] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.471 [2024-05-14 03:04:22.477161] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:36.471 [2024-05-14 03:04:22.477181] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.859 ms 00:17:36.471 [2024-05-14 03:04:22.477204] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.471 [2024-05-14 03:04:22.486841] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.471 [2024-05-14 03:04:22.486893] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:36.471 [2024-05-14 03:04:22.486923] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.603 ms 
00:17:36.471 [2024-05-14 03:04:22.486933] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.471 [2024-05-14 03:04:22.488472] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.471 [2024-05-14 03:04:22.488528] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:36.471 [2024-05-14 03:04:22.488543] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.468 ms 00:17:36.471 [2024-05-14 03:04:22.488554] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.471 [2024-05-14 03:04:22.491783] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.471 [2024-05-14 03:04:22.491874] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:36.471 [2024-05-14 03:04:22.491923] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.179 ms 00:17:36.471 [2024-05-14 03:04:22.491942] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.471 [2024-05-14 03:04:22.492075] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.471 [2024-05-14 03:04:22.492114] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:36.471 [2024-05-14 03:04:22.492150] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:17:36.471 [2024-05-14 03:04:22.492171] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.471 [2024-05-14 03:04:22.494188] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.471 [2024-05-14 03:04:22.494279] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:36.471 [2024-05-14 03:04:22.494309] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.954 ms 00:17:36.471 [2024-05-14 03:04:22.494318] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.471 [2024-05-14 03:04:22.496022] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.471 [2024-05-14 03:04:22.496074] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:36.471 [2024-05-14 03:04:22.496104] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.667 ms 00:17:36.471 [2024-05-14 03:04:22.496114] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.734 [2024-05-14 03:04:22.497566] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.734 [2024-05-14 03:04:22.497601] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:36.734 [2024-05-14 03:04:22.497630] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.388 ms 00:17:36.734 [2024-05-14 03:04:22.497640] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.734 [2024-05-14 03:04:22.498779] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.734 [2024-05-14 03:04:22.498812] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:36.734 [2024-05-14 03:04:22.498841] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.072 ms 00:17:36.734 [2024-05-14 03:04:22.498850] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.734 [2024-05-14 03:04:22.498885] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:36.734 [2024-05-14 03:04:22.498907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:36.734 
[2024-05-14 03:04:22.498920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:36.734 [2024-05-14 03:04:22.498931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:36.734 [2024-05-14 03:04:22.498941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:36.734 [2024-05-14 03:04:22.498952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:36.734 [2024-05-14 03:04:22.498962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:36.734 [2024-05-14 03:04:22.498972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:36.734 [2024-05-14 03:04:22.498982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:36.734 [2024-05-14 03:04:22.498993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:36.734 [2024-05-14 03:04:22.499003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:36.734 [2024-05-14 03:04:22.499014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:36.734 [2024-05-14 03:04:22.499024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:36.734 [2024-05-14 03:04:22.499034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:36.734 [2024-05-14 03:04:22.499044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:36.734 [2024-05-14 03:04:22.499055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:36.734 [2024-05-14 03:04:22.499065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:36.734 [2024-05-14 03:04:22.499076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:36.734 [2024-05-14 03:04:22.499086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:36.734 [2024-05-14 03:04:22.499096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:36.734 [2024-05-14 03:04:22.499106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:36.734 [2024-05-14 03:04:22.499116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:36.734 [2024-05-14 03:04:22.499126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:36.734 [2024-05-14 03:04:22.499157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:36.734 [2024-05-14 03:04:22.499168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:36.734 [2024-05-14 03:04:22.499178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:36.734 [2024-05-14 03:04:22.499189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 
00:17:36.734 [2024-05-14 03:04:22.499199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:36.734 [2024-05-14 03:04:22.499209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:36.734 [2024-05-14 03:04:22.499219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:36.734 [2024-05-14 03:04:22.499230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:36.734 [2024-05-14 03:04:22.499240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:36.734 [2024-05-14 03:04:22.499251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:36.734 [2024-05-14 03:04:22.499261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:36.734 [2024-05-14 03:04:22.499271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:36.734 [2024-05-14 03:04:22.499281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:36.734 [2024-05-14 03:04:22.499292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:36.734 [2024-05-14 03:04:22.499302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:36.734 [2024-05-14 03:04:22.499312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:36.734 [2024-05-14 03:04:22.499322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:36.734 [2024-05-14 03:04:22.499332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:36.734 [2024-05-14 03:04:22.499342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:36.734 [2024-05-14 03:04:22.499353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:36.734 [2024-05-14 03:04:22.499363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:36.734 [2024-05-14 03:04:22.499374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:36.734 [2024-05-14 03:04:22.499384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:36.734 [2024-05-14 03:04:22.499394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:36.734 [2024-05-14 03:04:22.499403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:36.734 [2024-05-14 03:04:22.499430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:36.734 [2024-05-14 03:04:22.499441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:36.735 [2024-05-14 03:04:22.499452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:36.735 [2024-05-14 03:04:22.499462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 
wr_cnt: 0 state: free 00:17:36.735 [2024-05-14 03:04:22.499473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:36.735 [2024-05-14 03:04:22.499483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:36.735 [2024-05-14 03:04:22.499494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:36.735 [2024-05-14 03:04:22.499504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:36.735 [2024-05-14 03:04:22.499515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:36.735 [2024-05-14 03:04:22.499540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:36.735 [2024-05-14 03:04:22.499551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:36.735 [2024-05-14 03:04:22.499564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:36.735 [2024-05-14 03:04:22.499575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:36.735 [2024-05-14 03:04:22.499591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:36.735 [2024-05-14 03:04:22.499603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:36.735 [2024-05-14 03:04:22.499613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:36.735 [2024-05-14 03:04:22.499625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:36.735 [2024-05-14 03:04:22.499636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:36.735 [2024-05-14 03:04:22.499647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:36.735 [2024-05-14 03:04:22.499658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:36.735 [2024-05-14 03:04:22.499669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:36.735 [2024-05-14 03:04:22.499680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:36.735 [2024-05-14 03:04:22.499690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:36.735 [2024-05-14 03:04:22.499701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:36.735 [2024-05-14 03:04:22.499712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:36.735 [2024-05-14 03:04:22.499722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:36.735 [2024-05-14 03:04:22.499733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:36.735 [2024-05-14 03:04:22.499744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:36.735 [2024-05-14 03:04:22.499755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:36.735 [2024-05-14 03:04:22.499765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:36.735 [2024-05-14 03:04:22.499776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:36.735 [2024-05-14 03:04:22.499787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:36.735 [2024-05-14 03:04:22.499798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:36.735 [2024-05-14 03:04:22.499808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:36.735 [2024-05-14 03:04:22.499819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:36.735 [2024-05-14 03:04:22.499830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:36.735 [2024-05-14 03:04:22.499866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:36.735 [2024-05-14 03:04:22.499880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:36.735 [2024-05-14 03:04:22.499891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:36.735 [2024-05-14 03:04:22.499902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:36.735 [2024-05-14 03:04:22.499913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:36.735 [2024-05-14 03:04:22.499925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:36.735 [2024-05-14 03:04:22.499939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:36.735 [2024-05-14 03:04:22.499950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:36.735 [2024-05-14 03:04:22.499962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:36.735 [2024-05-14 03:04:22.499974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:36.735 [2024-05-14 03:04:22.499985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:36.735 [2024-05-14 03:04:22.499997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:36.735 [2024-05-14 03:04:22.500010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:36.735 [2024-05-14 03:04:22.500022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:36.735 [2024-05-14 03:04:22.500033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:36.735 [2024-05-14 03:04:22.500044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:36.735 [2024-05-14 03:04:22.500056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:36.735 [2024-05-14 03:04:22.500076] ftl_debug.c: 211:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] 00:17:36.735 [2024-05-14 03:04:22.500087] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f1c59e5a-0c75-4386-944d-643450163ef9 00:17:36.735 [2024-05-14 03:04:22.500099] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:36.735 [2024-05-14 03:04:22.500116] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:36.735 [2024-05-14 03:04:22.500137] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:36.735 [2024-05-14 03:04:22.500199] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:36.735 [2024-05-14 03:04:22.500209] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:36.735 [2024-05-14 03:04:22.500243] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:36.735 [2024-05-14 03:04:22.500261] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:36.735 [2024-05-14 03:04:22.500269] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:36.735 [2024-05-14 03:04:22.500278] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:36.735 [2024-05-14 03:04:22.500288] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.735 [2024-05-14 03:04:22.500298] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:36.735 [2024-05-14 03:04:22.500312] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.405 ms 00:17:36.735 [2024-05-14 03:04:22.500322] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.735 [2024-05-14 03:04:22.501563] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.735 [2024-05-14 03:04:22.501593] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:36.735 [2024-05-14 03:04:22.501606] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.218 ms 00:17:36.735 [2024-05-14 03:04:22.501616] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.735 [2024-05-14 03:04:22.501671] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.735 [2024-05-14 03:04:22.501685] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:36.735 [2024-05-14 03:04:22.501697] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:17:36.735 [2024-05-14 03:04:22.501707] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.735 [2024-05-14 03:04:22.506524] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:36.735 [2024-05-14 03:04:22.506574] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:36.735 [2024-05-14 03:04:22.506588] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:36.735 [2024-05-14 03:04:22.506608] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.735 [2024-05-14 03:04:22.506663] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:36.735 [2024-05-14 03:04:22.506677] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:36.735 [2024-05-14 03:04:22.506687] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:36.735 [2024-05-14 03:04:22.506696] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.735 [2024-05-14 03:04:22.506749] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:17:36.735 [2024-05-14 03:04:22.506764] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:36.735 [2024-05-14 03:04:22.506775] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:36.735 [2024-05-14 03:04:22.506784] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.735 [2024-05-14 03:04:22.506821] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:36.735 [2024-05-14 03:04:22.506833] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:36.735 [2024-05-14 03:04:22.506843] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:36.735 [2024-05-14 03:04:22.506852] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.735 [2024-05-14 03:04:22.514691] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:36.735 [2024-05-14 03:04:22.514766] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:36.735 [2024-05-14 03:04:22.514799] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:36.735 [2024-05-14 03:04:22.514809] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.735 [2024-05-14 03:04:22.518329] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:36.735 [2024-05-14 03:04:22.518380] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:36.735 [2024-05-14 03:04:22.518410] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:36.735 [2024-05-14 03:04:22.518419] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.735 [2024-05-14 03:04:22.518445] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:36.735 [2024-05-14 03:04:22.518465] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:36.735 [2024-05-14 03:04:22.518476] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:36.735 [2024-05-14 03:04:22.518485] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.735 [2024-05-14 03:04:22.518514] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:36.735 [2024-05-14 03:04:22.518526] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:36.735 [2024-05-14 03:04:22.518553] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:36.735 [2024-05-14 03:04:22.518562] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.735 [2024-05-14 03:04:22.518643] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:36.735 [2024-05-14 03:04:22.518664] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:36.735 [2024-05-14 03:04:22.518674] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:36.735 [2024-05-14 03:04:22.518684] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.735 [2024-05-14 03:04:22.518732] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:36.735 [2024-05-14 03:04:22.518748] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:36.735 [2024-05-14 03:04:22.518759] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:36.735 [2024-05-14 03:04:22.518769] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.735 [2024-05-14 
03:04:22.518811] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:36.735 [2024-05-14 03:04:22.518824] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:36.735 [2024-05-14 03:04:22.518849] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:36.735 [2024-05-14 03:04:22.518859] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.735 [2024-05-14 03:04:22.518909] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:36.735 [2024-05-14 03:04:22.518923] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:36.735 [2024-05-14 03:04:22.518933] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:36.735 [2024-05-14 03:04:22.518954] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.735 [2024-05-14 03:04:22.519106] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 48.186 ms, result 0 00:17:36.735 00:17:36.735 00:17:36.735 03:04:22 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:37.301 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:17:37.301 03:04:23 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:17:37.301 03:04:23 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:17:37.301 03:04:23 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:37.301 03:04:23 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:37.301 03:04:23 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:17:37.558 03:04:23 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:37.558 03:04:23 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 90552 00:17:37.558 03:04:23 ftl.ftl_trim -- common/autotest_common.sh@946 -- # '[' -z 90552 ']' 00:17:37.558 03:04:23 ftl.ftl_trim -- common/autotest_common.sh@950 -- # kill -0 90552 00:17:37.558 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (90552) - No such process 00:17:37.558 Process with pid 90552 is not found 00:17:37.558 03:04:23 ftl.ftl_trim -- common/autotest_common.sh@973 -- # echo 'Process with pid 90552 is not found' 00:17:37.558 00:17:37.558 real 0m56.707s 00:17:37.558 user 1m15.758s 00:17:37.558 sys 0m5.914s 00:17:37.558 03:04:23 ftl.ftl_trim -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:37.558 ************************************ 00:17:37.558 END TEST ftl_trim 00:17:37.558 ************************************ 00:17:37.558 03:04:23 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:37.558 03:04:23 ftl -- ftl/ftl.sh@77 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:17:37.558 03:04:23 ftl -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:17:37.558 03:04:23 ftl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:37.558 03:04:23 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:37.558 ************************************ 00:17:37.558 START TEST ftl_restore 00:17:37.558 ************************************ 00:17:37.558 03:04:23 ftl.ftl_restore -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:17:37.559 * Looking for test storage... 
00:17:37.559 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:37.559 03:04:23 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:37.559 03:04:23 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:17:37.559 03:04:23 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:37.559 03:04:23 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:37.559 03:04:23 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:37.559 03:04:23 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:37.559 03:04:23 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:37.559 03:04:23 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:37.559 03:04:23 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:37.559 03:04:23 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:37.559 03:04:23 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:37.559 03:04:23 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:37.559 03:04:23 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:37.559 03:04:23 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:37.559 03:04:23 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:37.559 03:04:23 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:37.559 03:04:23 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:37.559 03:04:23 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:37.559 03:04:23 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:37.559 03:04:23 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:37.559 03:04:23 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:37.559 03:04:23 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:37.559 03:04:23 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:37.559 03:04:23 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:37.559 03:04:23 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:37.559 03:04:23 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:37.559 03:04:23 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:37.559 03:04:23 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:37.559 03:04:23 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:37.559 03:04:23 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:37.559 03:04:23 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:17:37.559 03:04:23 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.8Rl2kXcuPr 00:17:37.559 03:04:23 ftl.ftl_restore -- 
ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:37.559 03:04:23 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:17:37.559 03:04:23 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:17:37.559 03:04:23 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:37.559 03:04:23 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:17:37.559 03:04:23 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:17:37.559 03:04:23 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:17:37.559 03:04:23 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:17:37.559 03:04:23 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=90781 00:17:37.559 03:04:23 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 90781 00:17:37.559 03:04:23 ftl.ftl_restore -- common/autotest_common.sh@827 -- # '[' -z 90781 ']' 00:17:37.559 03:04:23 ftl.ftl_restore -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:37.559 03:04:23 ftl.ftl_restore -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:37.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:37.559 03:04:23 ftl.ftl_restore -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:37.559 03:04:23 ftl.ftl_restore -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:37.559 03:04:23 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:17:37.559 03:04:23 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:37.816 [2024-05-14 03:04:23.679275] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:17:37.816 [2024-05-14 03:04:23.679491] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90781 ] 00:17:37.816 [2024-05-14 03:04:23.829219] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:17:38.074 [2024-05-14 03:04:23.854039] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.074 [2024-05-14 03:04:23.896282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.638 03:04:24 ftl.ftl_restore -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:38.638 03:04:24 ftl.ftl_restore -- common/autotest_common.sh@860 -- # return 0 00:17:38.638 03:04:24 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:38.638 03:04:24 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:17:38.638 03:04:24 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:38.638 03:04:24 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:17:38.638 03:04:24 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:17:38.638 03:04:24 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:38.896 03:04:24 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:38.896 03:04:24 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:17:38.896 03:04:24 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:38.896 03:04:24 ftl.ftl_restore -- common/autotest_common.sh@1374 -- # local bdev_name=nvme0n1 00:17:38.896 03:04:24 ftl.ftl_restore -- common/autotest_common.sh@1375 -- # local bdev_info 00:17:38.896 03:04:24 ftl.ftl_restore -- common/autotest_common.sh@1376 -- # local bs 00:17:38.896 03:04:24 ftl.ftl_restore -- common/autotest_common.sh@1377 -- # local nb 00:17:38.896 03:04:24 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:39.154 03:04:25 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:17:39.154 { 00:17:39.154 "name": "nvme0n1", 00:17:39.154 "aliases": [ 00:17:39.154 "92543ad4-3f57-4ddb-bb1c-e5b3972e61b2" 00:17:39.154 ], 00:17:39.154 "product_name": "NVMe disk", 00:17:39.154 "block_size": 4096, 00:17:39.154 "num_blocks": 1310720, 00:17:39.154 "uuid": "92543ad4-3f57-4ddb-bb1c-e5b3972e61b2", 00:17:39.154 "assigned_rate_limits": { 00:17:39.154 "rw_ios_per_sec": 0, 00:17:39.154 "rw_mbytes_per_sec": 0, 00:17:39.154 "r_mbytes_per_sec": 0, 00:17:39.154 "w_mbytes_per_sec": 0 00:17:39.154 }, 00:17:39.154 "claimed": true, 00:17:39.154 "claim_type": "read_many_write_one", 00:17:39.154 "zoned": false, 00:17:39.154 "supported_io_types": { 00:17:39.154 "read": true, 00:17:39.154 "write": true, 00:17:39.154 "unmap": true, 00:17:39.154 "write_zeroes": true, 00:17:39.154 "flush": true, 00:17:39.154 "reset": true, 00:17:39.154 "compare": true, 00:17:39.154 "compare_and_write": false, 00:17:39.154 "abort": true, 00:17:39.154 "nvme_admin": true, 00:17:39.154 "nvme_io": true 00:17:39.154 }, 00:17:39.154 "driver_specific": { 00:17:39.154 "nvme": [ 00:17:39.154 { 00:17:39.154 "pci_address": "0000:00:11.0", 00:17:39.154 "trid": { 00:17:39.154 "trtype": "PCIe", 00:17:39.154 "traddr": "0000:00:11.0" 00:17:39.154 }, 00:17:39.154 "ctrlr_data": { 00:17:39.155 "cntlid": 0, 00:17:39.155 "vendor_id": "0x1b36", 00:17:39.155 "model_number": "QEMU NVMe Ctrl", 00:17:39.155 "serial_number": "12341", 00:17:39.155 "firmware_revision": "8.0.0", 00:17:39.155 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:39.155 "oacs": { 00:17:39.155 "security": 0, 00:17:39.155 "format": 1, 00:17:39.155 "firmware": 0, 00:17:39.155 "ns_manage": 1 00:17:39.155 }, 00:17:39.155 "multi_ctrlr": false, 00:17:39.155 
"ana_reporting": false 00:17:39.155 }, 00:17:39.155 "vs": { 00:17:39.155 "nvme_version": "1.4" 00:17:39.155 }, 00:17:39.155 "ns_data": { 00:17:39.155 "id": 1, 00:17:39.155 "can_share": false 00:17:39.155 } 00:17:39.155 } 00:17:39.155 ], 00:17:39.155 "mp_policy": "active_passive" 00:17:39.155 } 00:17:39.155 } 00:17:39.155 ]' 00:17:39.155 03:04:25 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:17:39.155 03:04:25 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # bs=4096 00:17:39.155 03:04:25 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:17:39.155 03:04:25 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # nb=1310720 00:17:39.155 03:04:25 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bdev_size=5120 00:17:39.155 03:04:25 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # echo 5120 00:17:39.155 03:04:25 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:17:39.155 03:04:25 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:39.155 03:04:25 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:17:39.413 03:04:25 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:39.413 03:04:25 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:39.671 03:04:25 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=57d2a042-85c4-47c9-95e1-7d47d9ed9144 00:17:39.671 03:04:25 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:17:39.671 03:04:25 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 57d2a042-85c4-47c9-95e1-7d47d9ed9144 00:17:39.671 03:04:25 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:39.929 03:04:25 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=59b79526-5d7b-49a5-82ef-9332805b74f0 00:17:39.929 03:04:25 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 59b79526-5d7b-49a5-82ef-9332805b74f0 00:17:40.187 03:04:26 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=48cdb448-d7d9-494a-b5ec-36b254f2554a 00:17:40.187 03:04:26 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:17:40.187 03:04:26 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 48cdb448-d7d9-494a-b5ec-36b254f2554a 00:17:40.187 03:04:26 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:17:40.187 03:04:26 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:40.187 03:04:26 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=48cdb448-d7d9-494a-b5ec-36b254f2554a 00:17:40.187 03:04:26 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:17:40.187 03:04:26 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 48cdb448-d7d9-494a-b5ec-36b254f2554a 00:17:40.187 03:04:26 ftl.ftl_restore -- common/autotest_common.sh@1374 -- # local bdev_name=48cdb448-d7d9-494a-b5ec-36b254f2554a 00:17:40.187 03:04:26 ftl.ftl_restore -- common/autotest_common.sh@1375 -- # local bdev_info 00:17:40.187 03:04:26 ftl.ftl_restore -- common/autotest_common.sh@1376 -- # local bs 00:17:40.187 03:04:26 ftl.ftl_restore -- common/autotest_common.sh@1377 -- # local nb 00:17:40.187 03:04:26 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 48cdb448-d7d9-494a-b5ec-36b254f2554a 00:17:40.445 03:04:26 ftl.ftl_restore -- 
common/autotest_common.sh@1378 -- # bdev_info='[ 00:17:40.445 { 00:17:40.445 "name": "48cdb448-d7d9-494a-b5ec-36b254f2554a", 00:17:40.445 "aliases": [ 00:17:40.445 "lvs/nvme0n1p0" 00:17:40.445 ], 00:17:40.445 "product_name": "Logical Volume", 00:17:40.445 "block_size": 4096, 00:17:40.445 "num_blocks": 26476544, 00:17:40.445 "uuid": "48cdb448-d7d9-494a-b5ec-36b254f2554a", 00:17:40.445 "assigned_rate_limits": { 00:17:40.445 "rw_ios_per_sec": 0, 00:17:40.445 "rw_mbytes_per_sec": 0, 00:17:40.445 "r_mbytes_per_sec": 0, 00:17:40.445 "w_mbytes_per_sec": 0 00:17:40.445 }, 00:17:40.445 "claimed": false, 00:17:40.445 "zoned": false, 00:17:40.445 "supported_io_types": { 00:17:40.445 "read": true, 00:17:40.445 "write": true, 00:17:40.445 "unmap": true, 00:17:40.445 "write_zeroes": true, 00:17:40.445 "flush": false, 00:17:40.445 "reset": true, 00:17:40.445 "compare": false, 00:17:40.445 "compare_and_write": false, 00:17:40.445 "abort": false, 00:17:40.445 "nvme_admin": false, 00:17:40.445 "nvme_io": false 00:17:40.445 }, 00:17:40.445 "driver_specific": { 00:17:40.445 "lvol": { 00:17:40.445 "lvol_store_uuid": "59b79526-5d7b-49a5-82ef-9332805b74f0", 00:17:40.445 "base_bdev": "nvme0n1", 00:17:40.445 "thin_provision": true, 00:17:40.445 "num_allocated_clusters": 0, 00:17:40.445 "snapshot": false, 00:17:40.445 "clone": false, 00:17:40.445 "esnap_clone": false 00:17:40.445 } 00:17:40.445 } 00:17:40.445 } 00:17:40.445 ]' 00:17:40.445 03:04:26 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:17:40.445 03:04:26 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # bs=4096 00:17:40.445 03:04:26 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:17:40.703 03:04:26 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # nb=26476544 00:17:40.703 03:04:26 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:17:40.703 03:04:26 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # echo 103424 00:17:40.703 03:04:26 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:17:40.703 03:04:26 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:17:40.703 03:04:26 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:40.961 03:04:26 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:40.961 03:04:26 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:40.961 03:04:26 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 48cdb448-d7d9-494a-b5ec-36b254f2554a 00:17:40.961 03:04:26 ftl.ftl_restore -- common/autotest_common.sh@1374 -- # local bdev_name=48cdb448-d7d9-494a-b5ec-36b254f2554a 00:17:40.961 03:04:26 ftl.ftl_restore -- common/autotest_common.sh@1375 -- # local bdev_info 00:17:40.961 03:04:26 ftl.ftl_restore -- common/autotest_common.sh@1376 -- # local bs 00:17:40.961 03:04:26 ftl.ftl_restore -- common/autotest_common.sh@1377 -- # local nb 00:17:40.961 03:04:26 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 48cdb448-d7d9-494a-b5ec-36b254f2554a 00:17:41.220 03:04:26 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:17:41.220 { 00:17:41.220 "name": "48cdb448-d7d9-494a-b5ec-36b254f2554a", 00:17:41.220 "aliases": [ 00:17:41.220 "lvs/nvme0n1p0" 00:17:41.220 ], 00:17:41.220 "product_name": "Logical Volume", 00:17:41.220 "block_size": 4096, 00:17:41.220 "num_blocks": 26476544, 00:17:41.220 "uuid": 
"48cdb448-d7d9-494a-b5ec-36b254f2554a", 00:17:41.220 "assigned_rate_limits": { 00:17:41.220 "rw_ios_per_sec": 0, 00:17:41.220 "rw_mbytes_per_sec": 0, 00:17:41.220 "r_mbytes_per_sec": 0, 00:17:41.220 "w_mbytes_per_sec": 0 00:17:41.220 }, 00:17:41.220 "claimed": false, 00:17:41.220 "zoned": false, 00:17:41.220 "supported_io_types": { 00:17:41.220 "read": true, 00:17:41.220 "write": true, 00:17:41.220 "unmap": true, 00:17:41.220 "write_zeroes": true, 00:17:41.220 "flush": false, 00:17:41.220 "reset": true, 00:17:41.220 "compare": false, 00:17:41.220 "compare_and_write": false, 00:17:41.220 "abort": false, 00:17:41.220 "nvme_admin": false, 00:17:41.220 "nvme_io": false 00:17:41.220 }, 00:17:41.220 "driver_specific": { 00:17:41.220 "lvol": { 00:17:41.220 "lvol_store_uuid": "59b79526-5d7b-49a5-82ef-9332805b74f0", 00:17:41.220 "base_bdev": "nvme0n1", 00:17:41.220 "thin_provision": true, 00:17:41.220 "num_allocated_clusters": 0, 00:17:41.220 "snapshot": false, 00:17:41.220 "clone": false, 00:17:41.220 "esnap_clone": false 00:17:41.220 } 00:17:41.220 } 00:17:41.220 } 00:17:41.220 ]' 00:17:41.220 03:04:27 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:17:41.220 03:04:27 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # bs=4096 00:17:41.220 03:04:27 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:17:41.220 03:04:27 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # nb=26476544 00:17:41.220 03:04:27 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:17:41.220 03:04:27 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # echo 103424 00:17:41.220 03:04:27 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:17:41.220 03:04:27 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:41.478 03:04:27 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:17:41.478 03:04:27 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 48cdb448-d7d9-494a-b5ec-36b254f2554a 00:17:41.478 03:04:27 ftl.ftl_restore -- common/autotest_common.sh@1374 -- # local bdev_name=48cdb448-d7d9-494a-b5ec-36b254f2554a 00:17:41.478 03:04:27 ftl.ftl_restore -- common/autotest_common.sh@1375 -- # local bdev_info 00:17:41.478 03:04:27 ftl.ftl_restore -- common/autotest_common.sh@1376 -- # local bs 00:17:41.478 03:04:27 ftl.ftl_restore -- common/autotest_common.sh@1377 -- # local nb 00:17:41.478 03:04:27 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 48cdb448-d7d9-494a-b5ec-36b254f2554a 00:17:41.735 03:04:27 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:17:41.735 { 00:17:41.735 "name": "48cdb448-d7d9-494a-b5ec-36b254f2554a", 00:17:41.735 "aliases": [ 00:17:41.735 "lvs/nvme0n1p0" 00:17:41.735 ], 00:17:41.735 "product_name": "Logical Volume", 00:17:41.735 "block_size": 4096, 00:17:41.735 "num_blocks": 26476544, 00:17:41.735 "uuid": "48cdb448-d7d9-494a-b5ec-36b254f2554a", 00:17:41.735 "assigned_rate_limits": { 00:17:41.735 "rw_ios_per_sec": 0, 00:17:41.735 "rw_mbytes_per_sec": 0, 00:17:41.735 "r_mbytes_per_sec": 0, 00:17:41.735 "w_mbytes_per_sec": 0 00:17:41.735 }, 00:17:41.735 "claimed": false, 00:17:41.735 "zoned": false, 00:17:41.735 "supported_io_types": { 00:17:41.735 "read": true, 00:17:41.735 "write": true, 00:17:41.735 "unmap": true, 00:17:41.735 "write_zeroes": true, 00:17:41.735 "flush": false, 00:17:41.735 "reset": true, 00:17:41.735 "compare": false, 
00:17:41.735 "compare_and_write": false, 00:17:41.735 "abort": false, 00:17:41.735 "nvme_admin": false, 00:17:41.735 "nvme_io": false 00:17:41.735 }, 00:17:41.735 "driver_specific": { 00:17:41.735 "lvol": { 00:17:41.735 "lvol_store_uuid": "59b79526-5d7b-49a5-82ef-9332805b74f0", 00:17:41.735 "base_bdev": "nvme0n1", 00:17:41.735 "thin_provision": true, 00:17:41.735 "num_allocated_clusters": 0, 00:17:41.735 "snapshot": false, 00:17:41.735 "clone": false, 00:17:41.735 "esnap_clone": false 00:17:41.735 } 00:17:41.735 } 00:17:41.735 } 00:17:41.735 ]' 00:17:41.735 03:04:27 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:17:41.735 03:04:27 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # bs=4096 00:17:41.735 03:04:27 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:17:41.735 03:04:27 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # nb=26476544 00:17:41.735 03:04:27 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:17:41.735 03:04:27 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # echo 103424 00:17:41.735 03:04:27 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:17:41.735 03:04:27 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 48cdb448-d7d9-494a-b5ec-36b254f2554a --l2p_dram_limit 10' 00:17:41.735 03:04:27 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:17:41.735 03:04:27 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:17:41.735 03:04:27 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:17:41.735 03:04:27 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:17:41.735 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:17:41.735 03:04:27 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 48cdb448-d7d9-494a-b5ec-36b254f2554a --l2p_dram_limit 10 -c nvc0n1p0 00:17:41.994 [2024-05-14 03:04:27.900893] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.994 [2024-05-14 03:04:27.900991] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:41.994 [2024-05-14 03:04:27.901013] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:41.994 [2024-05-14 03:04:27.901036] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.994 [2024-05-14 03:04:27.901114] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.994 [2024-05-14 03:04:27.901137] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:41.994 [2024-05-14 03:04:27.901181] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:17:41.994 [2024-05-14 03:04:27.901199] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.994 [2024-05-14 03:04:27.901233] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:41.994 [2024-05-14 03:04:27.901602] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:41.994 [2024-05-14 03:04:27.901645] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.994 [2024-05-14 03:04:27.901663] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:41.994 [2024-05-14 03:04:27.901679] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.419 ms 
00:17:41.994 [2024-05-14 03:04:27.901692] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.994 [2024-05-14 03:04:27.901874] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 212fbdc4-0917-4fb4-873d-5428fad743dd 00:17:41.994 [2024-05-14 03:04:27.902941] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.994 [2024-05-14 03:04:27.902977] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:41.994 [2024-05-14 03:04:27.903004] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:17:41.994 [2024-05-14 03:04:27.903017] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.994 [2024-05-14 03:04:27.907743] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.994 [2024-05-14 03:04:27.907806] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:41.994 [2024-05-14 03:04:27.907841] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.643 ms 00:17:41.994 [2024-05-14 03:04:27.907865] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.994 [2024-05-14 03:04:27.907990] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.994 [2024-05-14 03:04:27.908010] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:41.994 [2024-05-14 03:04:27.908027] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:17:41.994 [2024-05-14 03:04:27.908042] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.994 [2024-05-14 03:04:27.908112] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.994 [2024-05-14 03:04:27.908130] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:41.994 [2024-05-14 03:04:27.908164] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:41.994 [2024-05-14 03:04:27.908176] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.994 [2024-05-14 03:04:27.908213] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:41.994 [2024-05-14 03:04:27.909773] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.994 [2024-05-14 03:04:27.909829] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:41.994 [2024-05-14 03:04:27.909878] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.572 ms 00:17:41.994 [2024-05-14 03:04:27.909891] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.994 [2024-05-14 03:04:27.909932] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.994 [2024-05-14 03:04:27.909952] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:41.994 [2024-05-14 03:04:27.909964] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:41.994 [2024-05-14 03:04:27.909991] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.994 [2024-05-14 03:04:27.910036] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:41.994 [2024-05-14 03:04:27.910207] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:41.994 [2024-05-14 03:04:27.910235] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base 
layout blob store 0x48 bytes 00:17:41.994 [2024-05-14 03:04:27.910257] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:41.994 [2024-05-14 03:04:27.910273] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:41.994 [2024-05-14 03:04:27.910289] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:41.994 [2024-05-14 03:04:27.910302] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:41.994 [2024-05-14 03:04:27.910315] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:41.994 [2024-05-14 03:04:27.910327] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:41.994 [2024-05-14 03:04:27.910340] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:41.994 [2024-05-14 03:04:27.910355] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.994 [2024-05-14 03:04:27.910368] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:41.994 [2024-05-14 03:04:27.910381] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:17:41.994 [2024-05-14 03:04:27.910406] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.994 [2024-05-14 03:04:27.910484] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.994 [2024-05-14 03:04:27.910512] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:41.994 [2024-05-14 03:04:27.910525] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:17:41.994 [2024-05-14 03:04:27.910538] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.994 [2024-05-14 03:04:27.910650] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:41.994 [2024-05-14 03:04:27.910689] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:41.994 [2024-05-14 03:04:27.910705] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:41.994 [2024-05-14 03:04:27.910719] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:41.994 [2024-05-14 03:04:27.910731] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:41.994 [2024-05-14 03:04:27.910744] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:41.994 [2024-05-14 03:04:27.910755] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:41.994 [2024-05-14 03:04:27.910768] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:41.994 [2024-05-14 03:04:27.910779] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:41.994 [2024-05-14 03:04:27.910791] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:41.994 [2024-05-14 03:04:27.910802] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:41.994 [2024-05-14 03:04:27.910815] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:41.994 [2024-05-14 03:04:27.910826] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:41.994 [2024-05-14 03:04:27.910841] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:41.994 [2024-05-14 03:04:27.910852] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:17:41.994 [2024-05-14 
03:04:27.910865] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:41.994 [2024-05-14 03:04:27.910877] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:41.994 [2024-05-14 03:04:27.910889] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:17:41.994 [2024-05-14 03:04:27.910900] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:41.994 [2024-05-14 03:04:27.910913] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:41.994 [2024-05-14 03:04:27.910923] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:17:41.994 [2024-05-14 03:04:27.910936] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:41.994 [2024-05-14 03:04:27.910947] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:41.994 [2024-05-14 03:04:27.910960] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:41.995 [2024-05-14 03:04:27.910970] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:41.995 [2024-05-14 03:04:27.910982] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:41.995 [2024-05-14 03:04:27.910993] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:17:41.995 [2024-05-14 03:04:27.911005] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:41.995 [2024-05-14 03:04:27.911016] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:41.995 [2024-05-14 03:04:27.911032] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:41.995 [2024-05-14 03:04:27.911043] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:41.995 [2024-05-14 03:04:27.911055] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:41.995 [2024-05-14 03:04:27.911066] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:17:41.995 [2024-05-14 03:04:27.911078] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:41.995 [2024-05-14 03:04:27.911089] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:41.995 [2024-05-14 03:04:27.911101] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:41.995 [2024-05-14 03:04:27.911112] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:41.995 [2024-05-14 03:04:27.911125] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:41.995 [2024-05-14 03:04:27.911156] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:17:41.995 [2024-05-14 03:04:27.911171] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:41.995 [2024-05-14 03:04:27.911182] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:41.995 [2024-05-14 03:04:27.911197] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:41.995 [2024-05-14 03:04:27.911208] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:41.995 [2024-05-14 03:04:27.911232] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:41.995 [2024-05-14 03:04:27.911244] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:41.995 [2024-05-14 03:04:27.911259] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:41.995 [2024-05-14 03:04:27.911270] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
3.38 MiB 00:17:41.995 [2024-05-14 03:04:27.911284] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:41.995 [2024-05-14 03:04:27.911295] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:41.995 [2024-05-14 03:04:27.911308] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:41.995 [2024-05-14 03:04:27.911320] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:41.995 [2024-05-14 03:04:27.911336] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:41.995 [2024-05-14 03:04:27.911350] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:41.995 [2024-05-14 03:04:27.911365] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:17:41.995 [2024-05-14 03:04:27.911377] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:17:41.995 [2024-05-14 03:04:27.911390] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:17:41.995 [2024-05-14 03:04:27.911402] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:17:41.995 [2024-05-14 03:04:27.911415] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:17:41.995 [2024-05-14 03:04:27.911427] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:17:41.995 [2024-05-14 03:04:27.911442] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:17:41.995 [2024-05-14 03:04:27.911454] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:17:41.995 [2024-05-14 03:04:27.911470] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:17:41.995 [2024-05-14 03:04:27.911482] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:17:41.995 [2024-05-14 03:04:27.911496] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:17:41.995 [2024-05-14 03:04:27.911508] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:17:41.995 [2024-05-14 03:04:27.911521] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:41.995 [2024-05-14 03:04:27.911534] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:41.995 [2024-05-14 03:04:27.911552] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:41.995 [2024-05-14 03:04:27.911564] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:41.995 [2024-05-14 03:04:27.911578] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:41.995 [2024-05-14 03:04:27.911590] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:41.995 [2024-05-14 03:04:27.911605] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.995 [2024-05-14 03:04:27.911617] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:41.995 [2024-05-14 03:04:27.911631] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.998 ms 00:17:41.995 [2024-05-14 03:04:27.911646] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.995 [2024-05-14 03:04:27.917959] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.995 [2024-05-14 03:04:27.918016] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:41.995 [2024-05-14 03:04:27.918037] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.253 ms 00:17:41.995 [2024-05-14 03:04:27.918049] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.995 [2024-05-14 03:04:27.918162] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.995 [2024-05-14 03:04:27.918182] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:41.995 [2024-05-14 03:04:27.918195] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:17:41.995 [2024-05-14 03:04:27.918206] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.995 [2024-05-14 03:04:27.927448] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.995 [2024-05-14 03:04:27.927498] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:41.995 [2024-05-14 03:04:27.927519] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.174 ms 00:17:41.995 [2024-05-14 03:04:27.927532] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.995 [2024-05-14 03:04:27.927582] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.995 [2024-05-14 03:04:27.927598] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:41.995 [2024-05-14 03:04:27.927613] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:41.995 [2024-05-14 03:04:27.927625] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.995 [2024-05-14 03:04:27.928031] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.995 [2024-05-14 03:04:27.928061] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:41.995 [2024-05-14 03:04:27.928081] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.335 ms 00:17:41.995 [2024-05-14 03:04:27.928093] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.995 [2024-05-14 03:04:27.928287] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.995 [2024-05-14 03:04:27.928310] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:41.995 [2024-05-14 03:04:27.928325] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.163 ms 00:17:41.995 [2024-05-14 03:04:27.928347] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:17:41.995 [2024-05-14 03:04:27.934126] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.995 [2024-05-14 03:04:27.934188] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:41.995 [2024-05-14 03:04:27.934224] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.750 ms 00:17:41.995 [2024-05-14 03:04:27.934235] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.995 [2024-05-14 03:04:27.943225] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:17:41.995 [2024-05-14 03:04:27.946066] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.995 [2024-05-14 03:04:27.946119] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:41.995 [2024-05-14 03:04:27.946175] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.764 ms 00:17:41.995 [2024-05-14 03:04:27.946190] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.995 [2024-05-14 03:04:27.995069] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.995 [2024-05-14 03:04:27.995177] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:41.995 [2024-05-14 03:04:27.995203] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.845 ms 00:17:41.995 [2024-05-14 03:04:27.995217] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.995 [2024-05-14 03:04:27.995275] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:17:41.995 [2024-05-14 03:04:27.995299] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:17:44.531 [2024-05-14 03:04:30.092700] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.531 [2024-05-14 03:04:30.092813] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:44.531 [2024-05-14 03:04:30.092835] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2097.435 ms 00:17:44.531 [2024-05-14 03:04:30.092849] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.531 [2024-05-14 03:04:30.093111] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.531 [2024-05-14 03:04:30.093189] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:44.531 [2024-05-14 03:04:30.093210] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:17:44.531 [2024-05-14 03:04:30.093225] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.531 [2024-05-14 03:04:30.096811] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.531 [2024-05-14 03:04:30.096887] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:44.531 [2024-05-14 03:04:30.096904] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.537 ms 00:17:44.531 [2024-05-14 03:04:30.096929] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.531 [2024-05-14 03:04:30.100066] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.531 [2024-05-14 03:04:30.100113] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:44.531 [2024-05-14 03:04:30.100143] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.093 ms 
00:17:44.531 [2024-05-14 03:04:30.100160] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.531 [2024-05-14 03:04:30.100437] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.531 [2024-05-14 03:04:30.100476] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:44.531 [2024-05-14 03:04:30.100491] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.217 ms 00:17:44.531 [2024-05-14 03:04:30.100504] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.531 [2024-05-14 03:04:30.121852] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.531 [2024-05-14 03:04:30.121931] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:44.531 [2024-05-14 03:04:30.121949] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.307 ms 00:17:44.531 [2024-05-14 03:04:30.121961] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.531 [2024-05-14 03:04:30.126089] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.531 [2024-05-14 03:04:30.126171] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:44.531 [2024-05-14 03:04:30.126191] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.084 ms 00:17:44.531 [2024-05-14 03:04:30.126206] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.531 [2024-05-14 03:04:30.128120] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.531 [2024-05-14 03:04:30.128189] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:44.531 [2024-05-14 03:04:30.128237] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.871 ms 00:17:44.531 [2024-05-14 03:04:30.128249] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.531 [2024-05-14 03:04:30.132296] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.531 [2024-05-14 03:04:30.132355] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:44.531 [2024-05-14 03:04:30.132371] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.005 ms 00:17:44.531 [2024-05-14 03:04:30.132384] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.531 [2024-05-14 03:04:30.132432] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.531 [2024-05-14 03:04:30.132452] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:44.531 [2024-05-14 03:04:30.132465] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:44.531 [2024-05-14 03:04:30.132477] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.531 [2024-05-14 03:04:30.132611] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.531 [2024-05-14 03:04:30.132631] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:44.531 [2024-05-14 03:04:30.132644] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:17:44.531 [2024-05-14 03:04:30.132662] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.531 [2024-05-14 03:04:30.133840] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2232.452 ms, result 0 00:17:44.531 { 00:17:44.531 "name": "ftl0", 00:17:44.531 "uuid": "212fbdc4-0917-4fb4-873d-5428fad743dd" 
00:17:44.531 } 00:17:44.531 03:04:30 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:17:44.531 03:04:30 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:44.531 03:04:30 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:17:44.531 03:04:30 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:44.790 [2024-05-14 03:04:30.657198] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.790 [2024-05-14 03:04:30.657263] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:44.790 [2024-05-14 03:04:30.657303] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:44.790 [2024-05-14 03:04:30.657314] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.790 [2024-05-14 03:04:30.657350] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:44.790 [2024-05-14 03:04:30.657795] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.790 [2024-05-14 03:04:30.657827] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:44.790 [2024-05-14 03:04:30.657841] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.412 ms 00:17:44.790 [2024-05-14 03:04:30.657854] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.790 [2024-05-14 03:04:30.658147] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.790 [2024-05-14 03:04:30.658178] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:44.790 [2024-05-14 03:04:30.658192] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:17:44.790 [2024-05-14 03:04:30.658208] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.790 [2024-05-14 03:04:30.661316] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.790 [2024-05-14 03:04:30.661365] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:44.790 [2024-05-14 03:04:30.661401] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.076 ms 00:17:44.790 [2024-05-14 03:04:30.661413] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.790 [2024-05-14 03:04:30.667088] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.790 [2024-05-14 03:04:30.667158] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:44.790 [2024-05-14 03:04:30.667173] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.653 ms 00:17:44.790 [2024-05-14 03:04:30.667188] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.790 [2024-05-14 03:04:30.668826] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.790 [2024-05-14 03:04:30.668931] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:44.790 [2024-05-14 03:04:30.668946] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.547 ms 00:17:44.790 [2024-05-14 03:04:30.668958] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.790 [2024-05-14 03:04:30.673000] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.790 [2024-05-14 03:04:30.673092] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:44.790 [2024-05-14 
03:04:30.673108] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.004 ms 00:17:44.790 [2024-05-14 03:04:30.673120] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.790 [2024-05-14 03:04:30.673271] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.790 [2024-05-14 03:04:30.673297] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:44.790 [2024-05-14 03:04:30.673312] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:17:44.790 [2024-05-14 03:04:30.673340] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.790 [2024-05-14 03:04:30.675120] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.790 [2024-05-14 03:04:30.675225] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:44.790 [2024-05-14 03:04:30.675240] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.726 ms 00:17:44.790 [2024-05-14 03:04:30.675252] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.790 [2024-05-14 03:04:30.676898] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.790 [2024-05-14 03:04:30.676982] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:44.790 [2024-05-14 03:04:30.677012] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.609 ms 00:17:44.790 [2024-05-14 03:04:30.677023] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.790 [2024-05-14 03:04:30.678418] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.790 [2024-05-14 03:04:30.678501] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:44.790 [2024-05-14 03:04:30.678515] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.358 ms 00:17:44.790 [2024-05-14 03:04:30.678542] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.790 [2024-05-14 03:04:30.679863] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.790 [2024-05-14 03:04:30.679924] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:44.790 [2024-05-14 03:04:30.679940] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.251 ms 00:17:44.790 [2024-05-14 03:04:30.679956] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.790 [2024-05-14 03:04:30.679998] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:44.790 [2024-05-14 03:04:30.680036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.680051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.680065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.680078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.680092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.680104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.680118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 
0 state: free 00:17:44.790 [2024-05-14 03:04:30.680129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.680160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.680174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.680188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.680214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.680246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.680258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.680286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.680298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.680312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.680323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.680336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.680348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.680361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.680373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.680386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.680397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.680413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.680425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.680438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.680450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.680465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.680477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.680490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.680502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 
0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.680515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.680528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.680541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.680554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.680568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.680580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.680608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.680620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.680632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.680645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.680658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.680669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.680684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.680695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.680708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.680719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.680732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.680743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.680757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.680769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.680782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.680793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.680806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.680817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.680829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.680841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.680854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.680865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.680880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.680891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.680905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.680916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.680928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.680939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.680952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.680966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.680979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.680990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.681003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.681014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.681028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.681039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.681052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:44.790 [2024-05-14 03:04:30.681063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:44.791 [2024-05-14 03:04:30.681080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:44.791 [2024-05-14 03:04:30.681091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:44.791 [2024-05-14 03:04:30.681104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:44.791 [2024-05-14 03:04:30.681115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:44.791 [2024-05-14 03:04:30.681128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:44.791 [2024-05-14 03:04:30.681149] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:44.791 [2024-05-14 03:04:30.681163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:44.791 [2024-05-14 03:04:30.681175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:44.791 [2024-05-14 03:04:30.681200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:44.791 [2024-05-14 03:04:30.681214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:44.791 [2024-05-14 03:04:30.681227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:44.791 [2024-05-14 03:04:30.681239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:44.791 [2024-05-14 03:04:30.681252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:44.791 [2024-05-14 03:04:30.681263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:44.791 [2024-05-14 03:04:30.681276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:44.791 [2024-05-14 03:04:30.681287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:44.791 [2024-05-14 03:04:30.681302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:44.791 [2024-05-14 03:04:30.681314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:44.791 [2024-05-14 03:04:30.681327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:44.791 [2024-05-14 03:04:30.681338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:44.791 [2024-05-14 03:04:30.681351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:44.791 [2024-05-14 03:04:30.681362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:44.791 [2024-05-14 03:04:30.681375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:44.791 [2024-05-14 03:04:30.681387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:44.791 [2024-05-14 03:04:30.681408] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:44.791 [2024-05-14 03:04:30.681430] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 212fbdc4-0917-4fb4-873d-5428fad743dd 00:17:44.791 [2024-05-14 03:04:30.681444] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:44.791 [2024-05-14 03:04:30.681454] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:44.791 [2024-05-14 03:04:30.681466] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:44.791 [2024-05-14 03:04:30.681477] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:44.791 [2024-05-14 03:04:30.681489] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:44.791 [2024-05-14 03:04:30.681503] ftl_debug.c: 
220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:44.791 [2024-05-14 03:04:30.681517] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:44.791 [2024-05-14 03:04:30.681527] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:44.791 [2024-05-14 03:04:30.681539] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:44.791 [2024-05-14 03:04:30.681550] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.791 [2024-05-14 03:04:30.681563] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:44.791 [2024-05-14 03:04:30.681575] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.553 ms 00:17:44.791 [2024-05-14 03:04:30.681588] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.791 [2024-05-14 03:04:30.682911] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.791 [2024-05-14 03:04:30.682949] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:44.791 [2024-05-14 03:04:30.682962] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.297 ms 00:17:44.791 [2024-05-14 03:04:30.682977] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.791 [2024-05-14 03:04:30.683030] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.791 [2024-05-14 03:04:30.683047] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:44.791 [2024-05-14 03:04:30.683059] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:17:44.791 [2024-05-14 03:04:30.683071] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.791 [2024-05-14 03:04:30.688071] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.791 [2024-05-14 03:04:30.688138] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:44.791 [2024-05-14 03:04:30.688203] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.791 [2024-05-14 03:04:30.688233] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.791 [2024-05-14 03:04:30.688315] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.791 [2024-05-14 03:04:30.688333] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:44.791 [2024-05-14 03:04:30.688344] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.791 [2024-05-14 03:04:30.688356] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.791 [2024-05-14 03:04:30.688496] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.791 [2024-05-14 03:04:30.688536] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:44.791 [2024-05-14 03:04:30.688550] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.791 [2024-05-14 03:04:30.688565] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.791 [2024-05-14 03:04:30.688591] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.791 [2024-05-14 03:04:30.688607] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:44.791 [2024-05-14 03:04:30.688618] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.791 [2024-05-14 03:04:30.688630] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:44.791 [2024-05-14 03:04:30.697042] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.791 [2024-05-14 03:04:30.697130] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:44.791 [2024-05-14 03:04:30.697210] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.791 [2024-05-14 03:04:30.697233] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.791 [2024-05-14 03:04:30.700791] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.791 [2024-05-14 03:04:30.700847] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:44.791 [2024-05-14 03:04:30.700862] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.791 [2024-05-14 03:04:30.700874] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.791 [2024-05-14 03:04:30.700930] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.791 [2024-05-14 03:04:30.700959] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:44.791 [2024-05-14 03:04:30.700971] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.791 [2024-05-14 03:04:30.700982] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.791 [2024-05-14 03:04:30.701043] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.791 [2024-05-14 03:04:30.701060] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:44.791 [2024-05-14 03:04:30.701071] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.791 [2024-05-14 03:04:30.701083] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.791 [2024-05-14 03:04:30.701192] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.791 [2024-05-14 03:04:30.701216] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:44.791 [2024-05-14 03:04:30.701228] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.791 [2024-05-14 03:04:30.701240] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.791 [2024-05-14 03:04:30.701287] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.791 [2024-05-14 03:04:30.701315] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:44.791 [2024-05-14 03:04:30.701327] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.791 [2024-05-14 03:04:30.701338] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.791 [2024-05-14 03:04:30.701380] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.791 [2024-05-14 03:04:30.701428] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:44.791 [2024-05-14 03:04:30.701444] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.791 [2024-05-14 03:04:30.701477] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.791 [2024-05-14 03:04:30.701550] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.791 [2024-05-14 03:04:30.701584] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:44.791 [2024-05-14 03:04:30.701598] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.791 [2024-05-14 03:04:30.701610] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.791 [2024-05-14 03:04:30.701755] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 44.535 ms, result 0 00:17:44.791 true 00:17:44.791 03:04:30 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 90781 00:17:44.791 03:04:30 ftl.ftl_restore -- common/autotest_common.sh@946 -- # '[' -z 90781 ']' 00:17:44.791 03:04:30 ftl.ftl_restore -- common/autotest_common.sh@950 -- # kill -0 90781 00:17:44.791 03:04:30 ftl.ftl_restore -- common/autotest_common.sh@951 -- # uname 00:17:44.791 03:04:30 ftl.ftl_restore -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:44.791 03:04:30 ftl.ftl_restore -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 90781 00:17:44.791 killing process with pid 90781 00:17:44.791 03:04:30 ftl.ftl_restore -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:44.791 03:04:30 ftl.ftl_restore -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:44.791 03:04:30 ftl.ftl_restore -- common/autotest_common.sh@964 -- # echo 'killing process with pid 90781' 00:17:44.791 03:04:30 ftl.ftl_restore -- common/autotest_common.sh@965 -- # kill 90781 00:17:44.791 03:04:30 ftl.ftl_restore -- common/autotest_common.sh@970 -- # wait 90781 00:17:48.128 03:04:33 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:17:52.313 262144+0 records in 00:17:52.313 262144+0 records out 00:17:52.313 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.26024 s, 252 MB/s 00:17:52.313 03:04:37 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:17:54.218 03:04:39 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:54.218 [2024-05-14 03:04:40.059024] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:17:54.218 [2024-05-14 03:04:40.059244] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90991 ] 00:17:54.218 [2024-05-14 03:04:40.207701] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:17:54.218 [2024-05-14 03:04:40.231319] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.479 [2024-05-14 03:04:40.274170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.479 [2024-05-14 03:04:40.364935] bdev.c:8090:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:54.479 [2024-05-14 03:04:40.365034] bdev.c:8090:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:54.740 [2024-05-14 03:04:40.513859] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.740 [2024-05-14 03:04:40.514084] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:54.740 [2024-05-14 03:04:40.514112] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:54.740 [2024-05-14 03:04:40.514124] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.740 [2024-05-14 03:04:40.514241] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.740 [2024-05-14 03:04:40.514268] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:54.740 [2024-05-14 03:04:40.514280] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:17:54.740 [2024-05-14 03:04:40.514291] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.740 [2024-05-14 03:04:40.514333] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:54.740 [2024-05-14 03:04:40.514630] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:54.740 [2024-05-14 03:04:40.514656] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.740 [2024-05-14 03:04:40.514667] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:54.740 [2024-05-14 03:04:40.514683] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.329 ms 00:17:54.740 [2024-05-14 03:04:40.514693] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.740 [2024-05-14 03:04:40.515713] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:54.740 [2024-05-14 03:04:40.518060] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.740 [2024-05-14 03:04:40.518097] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:54.740 [2024-05-14 03:04:40.518144] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.349 ms 00:17:54.740 [2024-05-14 03:04:40.518172] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.740 [2024-05-14 03:04:40.518234] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.740 [2024-05-14 03:04:40.518252] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:54.740 [2024-05-14 03:04:40.518264] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:17:54.740 [2024-05-14 03:04:40.518285] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.740 [2024-05-14 03:04:40.522517] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.740 [2024-05-14 03:04:40.522557] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:54.740 [2024-05-14 03:04:40.522593] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.164 ms 00:17:54.740 [2024-05-14 03:04:40.522612] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.740 [2024-05-14 03:04:40.522690] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.740 [2024-05-14 03:04:40.522711] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:54.740 [2024-05-14 03:04:40.522722] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:17:54.740 [2024-05-14 03:04:40.522740] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.740 [2024-05-14 03:04:40.522806] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.740 [2024-05-14 03:04:40.522822] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:54.740 [2024-05-14 03:04:40.522833] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:54.740 [2024-05-14 03:04:40.522849] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.740 [2024-05-14 03:04:40.522883] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:54.740 [2024-05-14 03:04:40.524131] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.740 [2024-05-14 03:04:40.524159] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:54.740 [2024-05-14 03:04:40.524207] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.259 ms 00:17:54.740 [2024-05-14 03:04:40.524221] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.740 [2024-05-14 03:04:40.524260] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.740 [2024-05-14 03:04:40.524275] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:54.740 [2024-05-14 03:04:40.524289] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:54.740 [2024-05-14 03:04:40.524306] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.740 [2024-05-14 03:04:40.524343] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:54.740 [2024-05-14 03:04:40.524372] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:17:54.740 [2024-05-14 03:04:40.524411] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:54.740 [2024-05-14 03:04:40.524436] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:17:54.741 [2024-05-14 03:04:40.524518] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:54.741 [2024-05-14 03:04:40.524536] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:54.741 [2024-05-14 03:04:40.524554] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:54.741 [2024-05-14 03:04:40.524591] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:54.741 [2024-05-14 03:04:40.524604] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:54.741 [2024-05-14 03:04:40.524616] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:54.741 [2024-05-14 03:04:40.524626] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P 
address size: 4 00:17:54.741 [2024-05-14 03:04:40.524636] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:54.741 [2024-05-14 03:04:40.524646] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:54.741 [2024-05-14 03:04:40.524657] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.741 [2024-05-14 03:04:40.524668] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:54.741 [2024-05-14 03:04:40.524679] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:17:54.741 [2024-05-14 03:04:40.524689] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.741 [2024-05-14 03:04:40.524761] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.741 [2024-05-14 03:04:40.524783] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:54.741 [2024-05-14 03:04:40.524794] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:17:54.741 [2024-05-14 03:04:40.524808] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.741 [2024-05-14 03:04:40.524890] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:54.741 [2024-05-14 03:04:40.524912] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:54.741 [2024-05-14 03:04:40.524925] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:54.741 [2024-05-14 03:04:40.524947] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.741 [2024-05-14 03:04:40.524958] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:54.741 [2024-05-14 03:04:40.524968] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:54.741 [2024-05-14 03:04:40.524978] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:54.741 [2024-05-14 03:04:40.524988] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:54.741 [2024-05-14 03:04:40.524998] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:54.741 [2024-05-14 03:04:40.525008] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:54.741 [2024-05-14 03:04:40.525018] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:54.741 [2024-05-14 03:04:40.525028] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:54.741 [2024-05-14 03:04:40.525038] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:54.741 [2024-05-14 03:04:40.525049] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:54.741 [2024-05-14 03:04:40.525059] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:17:54.741 [2024-05-14 03:04:40.525085] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.741 [2024-05-14 03:04:40.525097] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:54.741 [2024-05-14 03:04:40.525106] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:17:54.741 [2024-05-14 03:04:40.525116] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.741 [2024-05-14 03:04:40.525126] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:54.741 [2024-05-14 03:04:40.525137] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:17:54.741 [2024-05-14 03:04:40.525194] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:54.741 [2024-05-14 03:04:40.525207] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:54.741 [2024-05-14 03:04:40.525217] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:54.741 [2024-05-14 03:04:40.525227] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:54.741 [2024-05-14 03:04:40.525237] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:54.741 [2024-05-14 03:04:40.525247] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:17:54.741 [2024-05-14 03:04:40.525257] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:54.741 [2024-05-14 03:04:40.525267] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:54.741 [2024-05-14 03:04:40.525277] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:54.741 [2024-05-14 03:04:40.525286] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:54.741 [2024-05-14 03:04:40.525302] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:54.741 [2024-05-14 03:04:40.525313] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:17:54.741 [2024-05-14 03:04:40.525323] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:54.741 [2024-05-14 03:04:40.525333] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:54.741 [2024-05-14 03:04:40.525343] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:54.741 [2024-05-14 03:04:40.525352] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:54.741 [2024-05-14 03:04:40.525362] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:54.741 [2024-05-14 03:04:40.525372] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:17:54.741 [2024-05-14 03:04:40.525383] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:54.741 [2024-05-14 03:04:40.525393] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:54.741 [2024-05-14 03:04:40.525407] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:54.741 [2024-05-14 03:04:40.525418] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:54.741 [2024-05-14 03:04:40.525428] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.741 [2024-05-14 03:04:40.525439] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:54.741 [2024-05-14 03:04:40.525450] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:54.741 [2024-05-14 03:04:40.525460] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:54.741 [2024-05-14 03:04:40.525472] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:54.741 [2024-05-14 03:04:40.525483] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:54.741 [2024-05-14 03:04:40.525493] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:54.741 [2024-05-14 03:04:40.525504] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:54.741 [2024-05-14 03:04:40.525532] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:54.741 [2024-05-14 
03:04:40.525543] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:54.741 [2024-05-14 03:04:40.525554] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:17:54.741 [2024-05-14 03:04:40.525579] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:17:54.741 [2024-05-14 03:04:40.525591] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:17:54.741 [2024-05-14 03:04:40.525601] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:17:54.741 [2024-05-14 03:04:40.525612] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:17:54.741 [2024-05-14 03:04:40.525622] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:17:54.741 [2024-05-14 03:04:40.525633] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:17:54.741 [2024-05-14 03:04:40.525643] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:17:54.741 [2024-05-14 03:04:40.525654] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:17:54.741 [2024-05-14 03:04:40.525664] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:17:54.741 [2024-05-14 03:04:40.525678] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:17:54.741 [2024-05-14 03:04:40.525690] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:17:54.741 [2024-05-14 03:04:40.525701] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:54.741 [2024-05-14 03:04:40.525713] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:54.741 [2024-05-14 03:04:40.525724] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:54.741 [2024-05-14 03:04:40.525735] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:54.741 [2024-05-14 03:04:40.525746] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:54.741 [2024-05-14 03:04:40.525757] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:54.741 [2024-05-14 03:04:40.525769] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.741 [2024-05-14 03:04:40.525790] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:54.741 [2024-05-14 03:04:40.525801] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.921 ms 00:17:54.741 [2024-05-14 03:04:40.525816] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.741 [2024-05-14 03:04:40.531696] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.741 [2024-05-14 03:04:40.531728] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:54.741 [2024-05-14 03:04:40.531743] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.832 ms 00:17:54.741 [2024-05-14 03:04:40.531765] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.741 [2024-05-14 03:04:40.531852] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.741 [2024-05-14 03:04:40.531865] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:54.741 [2024-05-14 03:04:40.531902] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:17:54.741 [2024-05-14 03:04:40.531930] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.742 [2024-05-14 03:04:40.547436] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.742 [2024-05-14 03:04:40.547529] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:54.742 [2024-05-14 03:04:40.547564] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.449 ms 00:17:54.742 [2024-05-14 03:04:40.547580] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.742 [2024-05-14 03:04:40.547630] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.742 [2024-05-14 03:04:40.547647] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:54.742 [2024-05-14 03:04:40.547659] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:54.742 [2024-05-14 03:04:40.547675] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.742 [2024-05-14 03:04:40.548072] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.742 [2024-05-14 03:04:40.548094] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:54.742 [2024-05-14 03:04:40.548107] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.330 ms 00:17:54.742 [2024-05-14 03:04:40.548118] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.742 [2024-05-14 03:04:40.548305] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.742 [2024-05-14 03:04:40.548330] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:54.742 [2024-05-14 03:04:40.548341] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.157 ms 00:17:54.742 [2024-05-14 03:04:40.548352] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.742 [2024-05-14 03:04:40.553614] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.742 [2024-05-14 03:04:40.553651] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:54.742 [2024-05-14 03:04:40.553688] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.232 ms 00:17:54.742 [2024-05-14 03:04:40.553700] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.742 [2024-05-14 03:04:40.556062] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:17:54.742 [2024-05-14 03:04:40.556111] 
ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:54.742 [2024-05-14 03:04:40.556158] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.742 [2024-05-14 03:04:40.556172] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:54.742 [2024-05-14 03:04:40.556184] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.333 ms 00:17:54.742 [2024-05-14 03:04:40.556195] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.742 [2024-05-14 03:04:40.570242] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.742 [2024-05-14 03:04:40.570285] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:54.742 [2024-05-14 03:04:40.570318] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.989 ms 00:17:54.742 [2024-05-14 03:04:40.570338] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.742 [2024-05-14 03:04:40.572056] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.742 [2024-05-14 03:04:40.572094] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:54.742 [2024-05-14 03:04:40.572125] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.666 ms 00:17:54.742 [2024-05-14 03:04:40.572135] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.742 [2024-05-14 03:04:40.573740] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.742 [2024-05-14 03:04:40.573797] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:54.742 [2024-05-14 03:04:40.573812] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.555 ms 00:17:54.742 [2024-05-14 03:04:40.573822] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.742 [2024-05-14 03:04:40.574027] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.742 [2024-05-14 03:04:40.574046] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:54.742 [2024-05-14 03:04:40.574057] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:17:54.742 [2024-05-14 03:04:40.574068] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.742 [2024-05-14 03:04:40.592041] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.742 [2024-05-14 03:04:40.592113] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:54.742 [2024-05-14 03:04:40.592161] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.952 ms 00:17:54.742 [2024-05-14 03:04:40.592177] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.742 [2024-05-14 03:04:40.599556] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:17:54.742 [2024-05-14 03:04:40.601800] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.742 [2024-05-14 03:04:40.601838] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:54.742 [2024-05-14 03:04:40.601869] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.548 ms 00:17:54.742 [2024-05-14 03:04:40.601879] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.742 [2024-05-14 03:04:40.601961] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.742 [2024-05-14 
03:04:40.601986] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:54.742 [2024-05-14 03:04:40.601998] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:54.742 [2024-05-14 03:04:40.602008] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.742 [2024-05-14 03:04:40.602079] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.742 [2024-05-14 03:04:40.602096] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:54.742 [2024-05-14 03:04:40.602107] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:17:54.742 [2024-05-14 03:04:40.602121] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.742 [2024-05-14 03:04:40.604142] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.742 [2024-05-14 03:04:40.604250] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:54.742 [2024-05-14 03:04:40.604305] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.001 ms 00:17:54.742 [2024-05-14 03:04:40.604316] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.742 [2024-05-14 03:04:40.604351] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.742 [2024-05-14 03:04:40.604366] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:54.742 [2024-05-14 03:04:40.604378] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:54.742 [2024-05-14 03:04:40.604388] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.742 [2024-05-14 03:04:40.604450] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:54.742 [2024-05-14 03:04:40.604482] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.742 [2024-05-14 03:04:40.604520] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:54.742 [2024-05-14 03:04:40.604532] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:17:54.742 [2024-05-14 03:04:40.604542] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.742 [2024-05-14 03:04:40.608101] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.742 [2024-05-14 03:04:40.608303] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:54.742 [2024-05-14 03:04:40.608419] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.532 ms 00:17:54.742 [2024-05-14 03:04:40.608485] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.742 [2024-05-14 03:04:40.608761] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.742 [2024-05-14 03:04:40.608829] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:54.742 [2024-05-14 03:04:40.608896] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:17:54.742 [2024-05-14 03:04:40.608981] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.742 [2024-05-14 03:04:40.610227] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 95.865 ms, result 0 00:18:36.844  Copying: 24/1024 [MB] (24 MBps) Copying: 48/1024 [MB] (23 MBps) Copying: 72/1024 [MB] (24 MBps) Copying: 96/1024 [MB] (23 MBps) Copying: 119/1024 [MB] (23 MBps) Copying: 142/1024 [MB] (23 MBps) Copying: 
165/1024 [MB] (22 MBps) Copying: 189/1024 [MB] (23 MBps) Copying: 213/1024 [MB] (24 MBps) Copying: 237/1024 [MB] (23 MBps) Copying: 262/1024 [MB] (24 MBps) Copying: 286/1024 [MB] (24 MBps) Copying: 310/1024 [MB] (24 MBps) Copying: 336/1024 [MB] (25 MBps) Copying: 360/1024 [MB] (24 MBps) Copying: 385/1024 [MB] (24 MBps) Copying: 410/1024 [MB] (24 MBps) Copying: 434/1024 [MB] (24 MBps) Copying: 459/1024 [MB] (24 MBps) Copying: 483/1024 [MB] (23 MBps) Copying: 508/1024 [MB] (24 MBps) Copying: 532/1024 [MB] (24 MBps) Copying: 557/1024 [MB] (24 MBps) Copying: 581/1024 [MB] (24 MBps) Copying: 606/1024 [MB] (24 MBps) Copying: 630/1024 [MB] (24 MBps) Copying: 655/1024 [MB] (24 MBps) Copying: 679/1024 [MB] (24 MBps) Copying: 703/1024 [MB] (24 MBps) Copying: 727/1024 [MB] (23 MBps) Copying: 752/1024 [MB] (24 MBps) Copying: 776/1024 [MB] (24 MBps) Copying: 801/1024 [MB] (24 MBps) Copying: 826/1024 [MB] (24 MBps) Copying: 850/1024 [MB] (24 MBps) Copying: 874/1024 [MB] (24 MBps) Copying: 899/1024 [MB] (24 MBps) Copying: 922/1024 [MB] (23 MBps) Copying: 946/1024 [MB] (23 MBps) Copying: 970/1024 [MB] (23 MBps) Copying: 994/1024 [MB] (24 MBps) Copying: 1018/1024 [MB] (23 MBps) Copying: 1024/1024 [MB] (average 24 MBps)[2024-05-14 03:05:22.850625] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.844 [2024-05-14 03:05:22.850824] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:36.844 [2024-05-14 03:05:22.850955] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:36.844 [2024-05-14 03:05:22.851007] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.844 [2024-05-14 03:05:22.851150] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:36.844 [2024-05-14 03:05:22.851714] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.844 [2024-05-14 03:05:22.851854] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:36.844 [2024-05-14 03:05:22.851994] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.406 ms 00:18:36.844 [2024-05-14 03:05:22.852163] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.844 [2024-05-14 03:05:22.853807] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.844 [2024-05-14 03:05:22.853872] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:36.844 [2024-05-14 03:05:22.853890] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.556 ms 00:18:36.844 [2024-05-14 03:05:22.853901] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.844 [2024-05-14 03:05:22.868530] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.844 [2024-05-14 03:05:22.868578] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:36.844 [2024-05-14 03:05:22.868610] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.608 ms 00:18:36.844 [2024-05-14 03:05:22.868633] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.104 [2024-05-14 03:05:22.875234] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.104 [2024-05-14 03:05:22.875270] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:18:37.104 [2024-05-14 03:05:22.875283] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.557 ms 00:18:37.104 [2024-05-14 03:05:22.875293] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.104 [2024-05-14 03:05:22.876605] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.104 [2024-05-14 03:05:22.876656] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:37.104 [2024-05-14 03:05:22.876686] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.251 ms 00:18:37.104 [2024-05-14 03:05:22.876697] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.104 [2024-05-14 03:05:22.879808] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.104 [2024-05-14 03:05:22.879875] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:37.104 [2024-05-14 03:05:22.879906] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.077 ms 00:18:37.104 [2024-05-14 03:05:22.879916] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.104 [2024-05-14 03:05:22.880060] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.104 [2024-05-14 03:05:22.880081] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:37.104 [2024-05-14 03:05:22.880106] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:18:37.104 [2024-05-14 03:05:22.880125] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.104 [2024-05-14 03:05:22.882224] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.105 [2024-05-14 03:05:22.882297] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:37.105 [2024-05-14 03:05:22.882327] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.052 ms 00:18:37.105 [2024-05-14 03:05:22.882337] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.105 [2024-05-14 03:05:22.883915] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.105 [2024-05-14 03:05:22.883950] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:37.105 [2024-05-14 03:05:22.884004] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.533 ms 00:18:37.105 [2024-05-14 03:05:22.884015] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.105 [2024-05-14 03:05:22.885334] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.105 [2024-05-14 03:05:22.885385] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:37.105 [2024-05-14 03:05:22.885399] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.282 ms 00:18:37.105 [2024-05-14 03:05:22.885424] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.105 [2024-05-14 03:05:22.886608] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.105 [2024-05-14 03:05:22.886658] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:37.105 [2024-05-14 03:05:22.886672] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.095 ms 00:18:37.105 [2024-05-14 03:05:22.886682] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.105 [2024-05-14 03:05:22.886714] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:37.105 [2024-05-14 03:05:22.886735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.886748] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.886759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.886770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.886780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.886791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.886801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.886812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.886822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.886833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.886844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.886854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.886865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.886875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.886886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.886897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.886907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.886918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.886928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.886939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.886949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.886960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.886970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.886981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.886991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.887002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.887014] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.887024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.887034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.887046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.887056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.887067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.887079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.887090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.887100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.887111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.887121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.887148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.887178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.887188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.887200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.887210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.887221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.887231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.887242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.887252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.887263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.887274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.887284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.887295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.887305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 
[2024-05-14 03:05:22.887316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.887327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.887337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.887348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.887360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.887371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.887381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.887392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.887403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.887413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.887424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.887435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.887445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.887457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.887468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.887478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.887489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.887500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.887526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.887536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.887546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.887557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.887567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.887578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.887588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 
state: free 00:18:37.105 [2024-05-14 03:05:22.887598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:37.105 [2024-05-14 03:05:22.887609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:37.106 [2024-05-14 03:05:22.887619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:37.106 [2024-05-14 03:05:22.887629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:37.106 [2024-05-14 03:05:22.887639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:37.106 [2024-05-14 03:05:22.887650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:37.106 [2024-05-14 03:05:22.887660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:37.106 [2024-05-14 03:05:22.887670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:37.106 [2024-05-14 03:05:22.887682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:37.106 [2024-05-14 03:05:22.887692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:37.106 [2024-05-14 03:05:22.887703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:37.106 [2024-05-14 03:05:22.887713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:37.106 [2024-05-14 03:05:22.887724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:37.106 [2024-05-14 03:05:22.887734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:37.106 [2024-05-14 03:05:22.887744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:37.106 [2024-05-14 03:05:22.887754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:37.106 [2024-05-14 03:05:22.887765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:37.106 [2024-05-14 03:05:22.887775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:37.106 [2024-05-14 03:05:22.887786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:37.106 [2024-05-14 03:05:22.887796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:37.106 [2024-05-14 03:05:22.887811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:37.106 [2024-05-14 03:05:22.887822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:37.106 [2024-05-14 03:05:22.887833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:37.106 [2024-05-14 03:05:22.887843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:37.106 [2024-05-14 03:05:22.887862] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:18:37.106 [2024-05-14 03:05:22.887872] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 212fbdc4-0917-4fb4-873d-5428fad743dd 00:18:37.106 [2024-05-14 03:05:22.887894] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:37.106 [2024-05-14 03:05:22.887904] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:37.106 [2024-05-14 03:05:22.887914] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:37.106 [2024-05-14 03:05:22.887928] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:37.106 [2024-05-14 03:05:22.887938] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:37.106 [2024-05-14 03:05:22.887948] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:37.106 [2024-05-14 03:05:22.887958] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:37.106 [2024-05-14 03:05:22.887996] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:37.106 [2024-05-14 03:05:22.888006] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:37.106 [2024-05-14 03:05:22.888018] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.106 [2024-05-14 03:05:22.888030] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:37.106 [2024-05-14 03:05:22.888042] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.305 ms 00:18:37.106 [2024-05-14 03:05:22.888053] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.106 [2024-05-14 03:05:22.889484] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.106 [2024-05-14 03:05:22.889518] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:37.106 [2024-05-14 03:05:22.889532] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.409 ms 00:18:37.106 [2024-05-14 03:05:22.889544] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.106 [2024-05-14 03:05:22.889600] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.106 [2024-05-14 03:05:22.889615] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:37.106 [2024-05-14 03:05:22.889627] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:18:37.106 [2024-05-14 03:05:22.889650] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.106 [2024-05-14 03:05:22.894709] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:37.106 [2024-05-14 03:05:22.894746] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:37.106 [2024-05-14 03:05:22.894765] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:37.106 [2024-05-14 03:05:22.894775] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.106 [2024-05-14 03:05:22.894827] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:37.106 [2024-05-14 03:05:22.894840] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:37.106 [2024-05-14 03:05:22.894850] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:37.106 [2024-05-14 03:05:22.894860] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.106 [2024-05-14 03:05:22.894905] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:37.106 [2024-05-14 
03:05:22.894925] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:37.106 [2024-05-14 03:05:22.894936] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:37.106 [2024-05-14 03:05:22.894954] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.106 [2024-05-14 03:05:22.894973] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:37.106 [2024-05-14 03:05:22.894985] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:37.106 [2024-05-14 03:05:22.894994] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:37.106 [2024-05-14 03:05:22.895013] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.106 [2024-05-14 03:05:22.902614] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:37.106 [2024-05-14 03:05:22.902671] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:37.106 [2024-05-14 03:05:22.902704] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:37.106 [2024-05-14 03:05:22.902714] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.106 [2024-05-14 03:05:22.906273] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:37.106 [2024-05-14 03:05:22.906309] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:37.106 [2024-05-14 03:05:22.906340] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:37.106 [2024-05-14 03:05:22.906350] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.106 [2024-05-14 03:05:22.906409] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:37.106 [2024-05-14 03:05:22.906423] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:37.106 [2024-05-14 03:05:22.906440] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:37.106 [2024-05-14 03:05:22.906450] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.106 [2024-05-14 03:05:22.906475] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:37.106 [2024-05-14 03:05:22.906486] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:37.106 [2024-05-14 03:05:22.906496] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:37.106 [2024-05-14 03:05:22.906506] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.106 [2024-05-14 03:05:22.906590] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:37.106 [2024-05-14 03:05:22.906608] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:37.106 [2024-05-14 03:05:22.906626] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:37.106 [2024-05-14 03:05:22.906639] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.106 [2024-05-14 03:05:22.906681] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:37.106 [2024-05-14 03:05:22.906697] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:37.106 [2024-05-14 03:05:22.906708] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:37.106 [2024-05-14 03:05:22.906717] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.106 [2024-05-14 03:05:22.906762] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:37.106 [2024-05-14 03:05:22.906778] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:37.106 [2024-05-14 03:05:22.906789] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:37.106 [2024-05-14 03:05:22.906799] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.106 [2024-05-14 03:05:22.906846] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:37.106 [2024-05-14 03:05:22.906860] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:37.106 [2024-05-14 03:05:22.906870] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:37.106 [2024-05-14 03:05:22.906880] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.106 [2024-05-14 03:05:22.907006] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 56.349 ms, result 0 00:18:37.691 00:18:37.691 00:18:37.691 03:05:23 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:18:37.691 [2024-05-14 03:05:23.603804] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:18:37.691 [2024-05-14 03:05:23.603995] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91432 ] 00:18:37.955 [2024-05-14 03:05:23.753644] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:18:37.955 [2024-05-14 03:05:23.774947] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.955 [2024-05-14 03:05:23.809598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:37.955 [2024-05-14 03:05:23.890653] bdev.c:8090:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:37.955 [2024-05-14 03:05:23.890750] bdev.c:8090:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:38.215 [2024-05-14 03:05:24.040854] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.215 [2024-05-14 03:05:24.040909] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:38.215 [2024-05-14 03:05:24.040944] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:38.215 [2024-05-14 03:05:24.040965] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.215 [2024-05-14 03:05:24.041034] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.215 [2024-05-14 03:05:24.041053] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:38.215 [2024-05-14 03:05:24.041065] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:18:38.215 [2024-05-14 03:05:24.041074] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.215 [2024-05-14 03:05:24.041106] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:38.215 [2024-05-14 03:05:24.041417] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:38.215 [2024-05-14 03:05:24.041442] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.215 [2024-05-14 03:05:24.041453] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:38.215 [2024-05-14 03:05:24.041479] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.343 ms 00:18:38.215 [2024-05-14 03:05:24.041490] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.215 [2024-05-14 03:05:24.042741] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:38.215 [2024-05-14 03:05:24.045022] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.215 [2024-05-14 03:05:24.045060] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:38.215 [2024-05-14 03:05:24.045092] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.283 ms 00:18:38.215 [2024-05-14 03:05:24.045212] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.215 [2024-05-14 03:05:24.045273] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.215 [2024-05-14 03:05:24.045290] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:38.215 [2024-05-14 03:05:24.045302] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:18:38.215 [2024-05-14 03:05:24.045312] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.215 [2024-05-14 03:05:24.049566] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.215 [2024-05-14 03:05:24.049604] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:38.215 [2024-05-14 03:05:24.049649] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.179 ms 00:18:38.215 [2024-05-14 03:05:24.049659] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.215 [2024-05-14 03:05:24.049742] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.215 [2024-05-14 03:05:24.049764] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:38.215 [2024-05-14 03:05:24.049775] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:18:38.215 [2024-05-14 03:05:24.049785] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.215 [2024-05-14 03:05:24.049845] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.215 [2024-05-14 03:05:24.049861] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:38.215 [2024-05-14 03:05:24.049873] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:38.215 [2024-05-14 03:05:24.049889] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.215 [2024-05-14 03:05:24.049944] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:38.215 [2024-05-14 03:05:24.051308] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.215 [2024-05-14 03:05:24.051344] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:38.215 [2024-05-14 03:05:24.051360] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.395 ms 00:18:38.215 [2024-05-14 03:05:24.051372] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.215 [2024-05-14 03:05:24.051410] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.215 [2024-05-14 03:05:24.051438] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:38.215 [2024-05-14 03:05:24.051459] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:38.215 [2024-05-14 03:05:24.051490] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.215 [2024-05-14 03:05:24.051538] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:38.215 [2024-05-14 03:05:24.051599] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:18:38.215 [2024-05-14 03:05:24.051638] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:38.215 [2024-05-14 03:05:24.051668] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:18:38.215 [2024-05-14 03:05:24.051755] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:18:38.215 [2024-05-14 03:05:24.051774] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:38.215 [2024-05-14 03:05:24.051791] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:18:38.215 [2024-05-14 03:05:24.051804] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:38.215 [2024-05-14 03:05:24.051816] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:38.215 [2024-05-14 03:05:24.051826] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:38.215 [2024-05-14 03:05:24.051836] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P 
address size: 4 00:18:38.215 [2024-05-14 03:05:24.051845] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:18:38.215 [2024-05-14 03:05:24.051865] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:18:38.215 [2024-05-14 03:05:24.051896] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.215 [2024-05-14 03:05:24.051914] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:38.215 [2024-05-14 03:05:24.051924] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.360 ms 00:18:38.215 [2024-05-14 03:05:24.051934] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.215 [2024-05-14 03:05:24.052050] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.215 [2024-05-14 03:05:24.052076] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:38.215 [2024-05-14 03:05:24.052088] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:18:38.215 [2024-05-14 03:05:24.052103] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.215 [2024-05-14 03:05:24.052233] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:38.215 [2024-05-14 03:05:24.052255] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:38.215 [2024-05-14 03:05:24.052268] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:38.215 [2024-05-14 03:05:24.052294] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:38.215 [2024-05-14 03:05:24.052362] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:38.215 [2024-05-14 03:05:24.052372] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:38.215 [2024-05-14 03:05:24.052382] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:38.215 [2024-05-14 03:05:24.052393] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:38.215 [2024-05-14 03:05:24.052403] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:38.215 [2024-05-14 03:05:24.052412] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:38.215 [2024-05-14 03:05:24.052421] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:38.215 [2024-05-14 03:05:24.052430] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:38.215 [2024-05-14 03:05:24.052439] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:38.215 [2024-05-14 03:05:24.052448] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:38.215 [2024-05-14 03:05:24.052457] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:18:38.215 [2024-05-14 03:05:24.052482] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:38.215 [2024-05-14 03:05:24.052493] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:38.215 [2024-05-14 03:05:24.052503] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:18:38.215 [2024-05-14 03:05:24.052513] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:38.216 [2024-05-14 03:05:24.052522] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:18:38.216 [2024-05-14 03:05:24.052532] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:18:38.216 [2024-05-14 03:05:24.052541] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:18:38.216 [2024-05-14 03:05:24.052550] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:38.216 [2024-05-14 03:05:24.052559] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:38.216 [2024-05-14 03:05:24.052568] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:38.216 [2024-05-14 03:05:24.052576] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:38.216 [2024-05-14 03:05:24.052585] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:18:38.216 [2024-05-14 03:05:24.052609] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:38.216 [2024-05-14 03:05:24.052635] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:38.216 [2024-05-14 03:05:24.052659] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:38.216 [2024-05-14 03:05:24.052668] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:38.216 [2024-05-14 03:05:24.052683] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:38.216 [2024-05-14 03:05:24.052694] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:18:38.216 [2024-05-14 03:05:24.052719] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:38.216 [2024-05-14 03:05:24.052729] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:38.216 [2024-05-14 03:05:24.052738] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:38.216 [2024-05-14 03:05:24.052747] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:38.216 [2024-05-14 03:05:24.052757] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:38.216 [2024-05-14 03:05:24.052767] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:18:38.216 [2024-05-14 03:05:24.052776] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:38.216 [2024-05-14 03:05:24.052785] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:38.216 [2024-05-14 03:05:24.052800] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:38.216 [2024-05-14 03:05:24.052811] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:38.216 [2024-05-14 03:05:24.052822] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:38.216 [2024-05-14 03:05:24.052833] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:38.216 [2024-05-14 03:05:24.052843] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:38.216 [2024-05-14 03:05:24.052852] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:38.216 [2024-05-14 03:05:24.052865] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:38.216 [2024-05-14 03:05:24.052876] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:38.216 [2024-05-14 03:05:24.052886] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:38.216 [2024-05-14 03:05:24.052900] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:38.216 [2024-05-14 03:05:24.052914] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:38.216 [2024-05-14 
03:05:24.052926] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:38.216 [2024-05-14 03:05:24.052937] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:18:38.216 [2024-05-14 03:05:24.052948] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:18:38.216 [2024-05-14 03:05:24.052959] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:18:38.216 [2024-05-14 03:05:24.052970] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:18:38.216 [2024-05-14 03:05:24.052980] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:18:38.216 [2024-05-14 03:05:24.052991] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:18:38.216 [2024-05-14 03:05:24.053001] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:18:38.216 [2024-05-14 03:05:24.053012] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:18:38.216 [2024-05-14 03:05:24.053023] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:18:38.216 [2024-05-14 03:05:24.053033] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:18:38.216 [2024-05-14 03:05:24.053048] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:18:38.216 [2024-05-14 03:05:24.053060] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:18:38.216 [2024-05-14 03:05:24.053071] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:38.216 [2024-05-14 03:05:24.053083] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:38.216 [2024-05-14 03:05:24.053095] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:38.216 [2024-05-14 03:05:24.053106] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:38.216 [2024-05-14 03:05:24.053117] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:38.216 [2024-05-14 03:05:24.053129] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:38.216 [2024-05-14 03:05:24.053140] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.216 [2024-05-14 03:05:24.053151] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:38.216 [2024-05-14 03:05:24.053162] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.953 ms 00:18:38.216 [2024-05-14 03:05:24.053177] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.216 [2024-05-14 03:05:24.059138] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.216 [2024-05-14 03:05:24.059207] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:38.216 [2024-05-14 03:05:24.059226] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.898 ms 00:18:38.216 [2024-05-14 03:05:24.059237] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.216 [2024-05-14 03:05:24.059326] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.216 [2024-05-14 03:05:24.059340] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:38.216 [2024-05-14 03:05:24.059351] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:18:38.216 [2024-05-14 03:05:24.059361] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.216 [2024-05-14 03:05:24.074941] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.216 [2024-05-14 03:05:24.075000] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:38.216 [2024-05-14 03:05:24.075036] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.526 ms 00:18:38.216 [2024-05-14 03:05:24.075052] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.216 [2024-05-14 03:05:24.075111] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.216 [2024-05-14 03:05:24.075127] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:38.216 [2024-05-14 03:05:24.075138] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:38.216 [2024-05-14 03:05:24.075203] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.216 [2024-05-14 03:05:24.075598] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.216 [2024-05-14 03:05:24.075631] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:38.216 [2024-05-14 03:05:24.075647] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:18:38.216 [2024-05-14 03:05:24.075659] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.216 [2024-05-14 03:05:24.075811] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.216 [2024-05-14 03:05:24.075852] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:38.216 [2024-05-14 03:05:24.075868] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:18:38.216 [2024-05-14 03:05:24.075879] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.216 [2024-05-14 03:05:24.081684] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.216 [2024-05-14 03:05:24.081728] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:38.216 [2024-05-14 03:05:24.081751] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.769 ms 00:18:38.216 [2024-05-14 03:05:24.081764] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.216 [2024-05-14 03:05:24.084215] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:38.216 [2024-05-14 03:05:24.084268] 
ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:38.216 [2024-05-14 03:05:24.084331] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.216 [2024-05-14 03:05:24.084342] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:38.216 [2024-05-14 03:05:24.084354] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.444 ms 00:18:38.216 [2024-05-14 03:05:24.084364] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.216 [2024-05-14 03:05:24.100519] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.216 [2024-05-14 03:05:24.100587] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:38.216 [2024-05-14 03:05:24.100605] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.100 ms 00:18:38.216 [2024-05-14 03:05:24.100630] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.216 [2024-05-14 03:05:24.102575] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.216 [2024-05-14 03:05:24.102630] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:38.216 [2024-05-14 03:05:24.102647] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.899 ms 00:18:38.216 [2024-05-14 03:05:24.102657] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.216 [2024-05-14 03:05:24.104442] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.216 [2024-05-14 03:05:24.104479] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:38.216 [2024-05-14 03:05:24.104506] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.740 ms 00:18:38.217 [2024-05-14 03:05:24.104527] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.217 [2024-05-14 03:05:24.104781] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.217 [2024-05-14 03:05:24.104802] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:38.217 [2024-05-14 03:05:24.104815] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.158 ms 00:18:38.217 [2024-05-14 03:05:24.104826] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.217 [2024-05-14 03:05:24.123240] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.217 [2024-05-14 03:05:24.123307] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:38.217 [2024-05-14 03:05:24.123341] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.372 ms 00:18:38.217 [2024-05-14 03:05:24.123366] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.217 [2024-05-14 03:05:24.130861] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:38.217 [2024-05-14 03:05:24.133185] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.217 [2024-05-14 03:05:24.133246] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:38.217 [2024-05-14 03:05:24.133278] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.755 ms 00:18:38.217 [2024-05-14 03:05:24.133302] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.217 [2024-05-14 03:05:24.133389] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.217 [2024-05-14 
03:05:24.133408] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:38.217 [2024-05-14 03:05:24.133427] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:38.217 [2024-05-14 03:05:24.133453] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.217 [2024-05-14 03:05:24.133573] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.217 [2024-05-14 03:05:24.133607] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:38.217 [2024-05-14 03:05:24.133635] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:18:38.217 [2024-05-14 03:05:24.133655] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.217 [2024-05-14 03:05:24.135706] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.217 [2024-05-14 03:05:24.135761] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:18:38.217 [2024-05-14 03:05:24.135791] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.009 ms 00:18:38.217 [2024-05-14 03:05:24.135812] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.217 [2024-05-14 03:05:24.135846] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.217 [2024-05-14 03:05:24.135861] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:38.217 [2024-05-14 03:05:24.135872] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:38.217 [2024-05-14 03:05:24.135888] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.217 [2024-05-14 03:05:24.135928] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:38.217 [2024-05-14 03:05:24.135947] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.217 [2024-05-14 03:05:24.135958] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:38.217 [2024-05-14 03:05:24.135996] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:18:38.217 [2024-05-14 03:05:24.136007] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.217 [2024-05-14 03:05:24.139415] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.217 [2024-05-14 03:05:24.139453] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:38.217 [2024-05-14 03:05:24.139485] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.385 ms 00:18:38.217 [2024-05-14 03:05:24.139503] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.217 [2024-05-14 03:05:24.139590] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.217 [2024-05-14 03:05:24.139607] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:38.217 [2024-05-14 03:05:24.139618] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:18:38.217 [2024-05-14 03:05:24.139644] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.217 [2024-05-14 03:05:24.140881] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 99.511 ms, result 0 00:19:20.007  Copying: 24/1024 [MB] (24 MBps) Copying: 48/1024 [MB] (24 MBps) Copying: 73/1024 [MB] (24 MBps) Copying: 98/1024 [MB] (24 MBps) Copying: 122/1024 [MB] (24 MBps) Copying: 147/1024 [MB] (24 MBps) Copying: 
171/1024 [MB] (24 MBps) Copying: 196/1024 [MB] (24 MBps) Copying: 221/1024 [MB] (24 MBps) Copying: 245/1024 [MB] (24 MBps) Copying: 270/1024 [MB] (25 MBps) Copying: 295/1024 [MB] (24 MBps) Copying: 320/1024 [MB] (24 MBps) Copying: 344/1024 [MB] (24 MBps) Copying: 369/1024 [MB] (24 MBps) Copying: 392/1024 [MB] (23 MBps) Copying: 417/1024 [MB] (24 MBps) Copying: 442/1024 [MB] (24 MBps) Copying: 466/1024 [MB] (24 MBps) Copying: 490/1024 [MB] (23 MBps) Copying: 515/1024 [MB] (24 MBps) Copying: 539/1024 [MB] (24 MBps) Copying: 564/1024 [MB] (24 MBps) Copying: 589/1024 [MB] (24 MBps) Copying: 614/1024 [MB] (24 MBps) Copying: 637/1024 [MB] (23 MBps) Copying: 662/1024 [MB] (24 MBps) Copying: 687/1024 [MB] (25 MBps) Copying: 712/1024 [MB] (24 MBps) Copying: 737/1024 [MB] (25 MBps) Copying: 761/1024 [MB] (24 MBps) Copying: 787/1024 [MB] (25 MBps) Copying: 812/1024 [MB] (25 MBps) Copying: 837/1024 [MB] (24 MBps) Copying: 862/1024 [MB] (24 MBps) Copying: 886/1024 [MB] (24 MBps) Copying: 912/1024 [MB] (25 MBps) Copying: 937/1024 [MB] (25 MBps) Copying: 964/1024 [MB] (26 MBps) Copying: 989/1024 [MB] (25 MBps) Copying: 1013/1024 [MB] (24 MBps) Copying: 1024/1024 [MB] (average 24 MBps)[2024-05-14 03:06:05.904690] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.007 [2024-05-14 03:06:05.904992] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:20.007 [2024-05-14 03:06:05.905153] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:20.007 [2024-05-14 03:06:05.905213] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.007 [2024-05-14 03:06:05.905390] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:20.007 [2024-05-14 03:06:05.905901] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.007 [2024-05-14 03:06:05.906036] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:20.007 [2024-05-14 03:06:05.906180] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.433 ms 00:19:20.007 [2024-05-14 03:06:05.906300] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.007 [2024-05-14 03:06:05.906592] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.007 [2024-05-14 03:06:05.906728] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:20.007 [2024-05-14 03:06:05.906843] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.221 ms 00:19:20.007 [2024-05-14 03:06:05.906894] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.007 [2024-05-14 03:06:05.911502] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.007 [2024-05-14 03:06:05.911692] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:20.007 [2024-05-14 03:06:05.911820] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.498 ms 00:19:20.007 [2024-05-14 03:06:05.911937] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.007 [2024-05-14 03:06:05.918358] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.007 [2024-05-14 03:06:05.918551] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:19:20.007 [2024-05-14 03:06:05.918671] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.348 ms 00:19:20.007 [2024-05-14 03:06:05.918717] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:19:20.007 [2024-05-14 03:06:05.920171] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.007 [2024-05-14 03:06:05.920325] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:20.007 [2024-05-14 03:06:05.920485] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.299 ms 00:19:20.007 [2024-05-14 03:06:05.920533] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.007 [2024-05-14 03:06:05.923784] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.007 [2024-05-14 03:06:05.923938] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:20.007 [2024-05-14 03:06:05.923978] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.113 ms 00:19:20.007 [2024-05-14 03:06:05.923998] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.007 [2024-05-14 03:06:05.924177] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.007 [2024-05-14 03:06:05.924215] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:20.007 [2024-05-14 03:06:05.924228] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:19:20.007 [2024-05-14 03:06:05.924239] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.007 [2024-05-14 03:06:05.926099] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.007 [2024-05-14 03:06:05.926162] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:20.007 [2024-05-14 03:06:05.926194] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.837 ms 00:19:20.007 [2024-05-14 03:06:05.926204] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.008 [2024-05-14 03:06:05.927490] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.008 [2024-05-14 03:06:05.927539] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:20.008 [2024-05-14 03:06:05.927570] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.251 ms 00:19:20.008 [2024-05-14 03:06:05.927594] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.008 [2024-05-14 03:06:05.928887] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.008 [2024-05-14 03:06:05.928954] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:20.008 [2024-05-14 03:06:05.928997] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.262 ms 00:19:20.008 [2024-05-14 03:06:05.929007] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.008 [2024-05-14 03:06:05.930331] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.008 [2024-05-14 03:06:05.930380] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:20.008 [2024-05-14 03:06:05.930410] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.248 ms 00:19:20.008 [2024-05-14 03:06:05.930420] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.008 [2024-05-14 03:06:05.930465] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:20.008 [2024-05-14 03:06:05.930486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.930499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.930509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.930519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.930544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.930554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.930564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.930574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.930584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.930594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.930604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.930614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.930623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.930633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.930643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.930653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.930663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.930673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.930682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.930692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.930702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.930712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.930722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.930731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.930741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.930751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.930763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.930773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.930782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.930792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.930802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.930812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.930822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.930832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.930843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.930853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.930863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.930873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.930883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.930893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.930903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.930913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.930923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.930933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.930943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.930952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.930962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.930972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.930982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.930991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.931001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.931011] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.931020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.931030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.931041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.931052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.931062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.931073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.931082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.931092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.931102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.931112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.931122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.931132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.931142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.931151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.931503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.931654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.931711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.931829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.931885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.931995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.932108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.932254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.932300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.932314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 
03:06:05.932326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.932338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.932350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.932361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.932372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.932398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.932424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.932435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.932446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.932457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:20.008 [2024-05-14 03:06:05.932482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:20.009 [2024-05-14 03:06:05.932492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:20.009 [2024-05-14 03:06:05.932503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:20.009 [2024-05-14 03:06:05.932513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:20.009 [2024-05-14 03:06:05.932523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:20.009 [2024-05-14 03:06:05.932533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:20.009 [2024-05-14 03:06:05.932544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:20.009 [2024-05-14 03:06:05.932555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:20.009 [2024-05-14 03:06:05.932565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:20.009 [2024-05-14 03:06:05.932576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:20.009 [2024-05-14 03:06:05.932586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:20.009 [2024-05-14 03:06:05.932596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:20.009 [2024-05-14 03:06:05.932606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:20.009 [2024-05-14 03:06:05.932616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:20.009 [2024-05-14 03:06:05.932635] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:20.009 [2024-05-14 03:06:05.932646] 
ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 212fbdc4-0917-4fb4-873d-5428fad743dd 00:19:20.009 [2024-05-14 03:06:05.932656] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:20.009 [2024-05-14 03:06:05.932666] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:20.009 [2024-05-14 03:06:05.932683] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:20.009 [2024-05-14 03:06:05.932694] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:20.009 [2024-05-14 03:06:05.932703] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:20.009 [2024-05-14 03:06:05.932714] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:20.009 [2024-05-14 03:06:05.932724] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:20.009 [2024-05-14 03:06:05.932733] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:20.009 [2024-05-14 03:06:05.932742] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:20.009 [2024-05-14 03:06:05.932752] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.009 [2024-05-14 03:06:05.932763] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:20.009 [2024-05-14 03:06:05.932774] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.289 ms 00:19:20.009 [2024-05-14 03:06:05.932797] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.009 [2024-05-14 03:06:05.934116] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.009 [2024-05-14 03:06:05.934143] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:20.009 [2024-05-14 03:06:05.934156] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.294 ms 00:19:20.009 [2024-05-14 03:06:05.934168] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.009 [2024-05-14 03:06:05.934254] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.009 [2024-05-14 03:06:05.934271] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:20.009 [2024-05-14 03:06:05.934295] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:19:20.009 [2024-05-14 03:06:05.934305] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.009 [2024-05-14 03:06:05.939115] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.009 [2024-05-14 03:06:05.939183] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:20.009 [2024-05-14 03:06:05.939215] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.009 [2024-05-14 03:06:05.939226] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.009 [2024-05-14 03:06:05.939281] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.009 [2024-05-14 03:06:05.939294] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:20.009 [2024-05-14 03:06:05.939305] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.009 [2024-05-14 03:06:05.939314] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.009 [2024-05-14 03:06:05.939394] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.009 [2024-05-14 03:06:05.939413] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:20.009 [2024-05-14 03:06:05.939424] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.009 [2024-05-14 03:06:05.939442] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.009 [2024-05-14 03:06:05.939468] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.009 [2024-05-14 03:06:05.939483] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:20.009 [2024-05-14 03:06:05.939493] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.009 [2024-05-14 03:06:05.939503] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.009 [2024-05-14 03:06:05.947251] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.009 [2024-05-14 03:06:05.947303] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:20.009 [2024-05-14 03:06:05.947336] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.009 [2024-05-14 03:06:05.947346] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.009 [2024-05-14 03:06:05.951121] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.009 [2024-05-14 03:06:05.951187] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:20.009 [2024-05-14 03:06:05.951219] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.009 [2024-05-14 03:06:05.951231] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.009 [2024-05-14 03:06:05.951268] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.009 [2024-05-14 03:06:05.951287] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:20.009 [2024-05-14 03:06:05.951322] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.009 [2024-05-14 03:06:05.951349] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.009 [2024-05-14 03:06:05.951409] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.009 [2024-05-14 03:06:05.951425] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:20.009 [2024-05-14 03:06:05.951436] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.009 [2024-05-14 03:06:05.951447] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.009 [2024-05-14 03:06:05.951549] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.009 [2024-05-14 03:06:05.951567] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:20.009 [2024-05-14 03:06:05.951585] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.009 [2024-05-14 03:06:05.951597] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.009 [2024-05-14 03:06:05.951643] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.009 [2024-05-14 03:06:05.951660] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:20.009 [2024-05-14 03:06:05.951671] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.009 [2024-05-14 03:06:05.951683] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.009 [2024-05-14 03:06:05.951728] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:19:20.009 [2024-05-14 03:06:05.951749] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:20.009 [2024-05-14 03:06:05.951762] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.009 [2024-05-14 03:06:05.951779] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.009 [2024-05-14 03:06:05.951841] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.009 [2024-05-14 03:06:05.951858] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:20.009 [2024-05-14 03:06:05.951870] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.009 [2024-05-14 03:06:05.951880] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.009 [2024-05-14 03:06:05.952026] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 47.296 ms, result 0 00:19:20.268 00:19:20.268 00:19:20.268 03:06:06 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:22.174 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:19:22.174 03:06:08 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:19:22.432 [2024-05-14 03:06:08.251870] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:19:22.432 [2024-05-14 03:06:08.252040] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91885 ] 00:19:22.432 [2024-05-14 03:06:08.387332] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:19:22.432 [2024-05-14 03:06:08.411321] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.432 [2024-05-14 03:06:08.453979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:22.692 [2024-05-14 03:06:08.543932] bdev.c:8090:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:22.692 [2024-05-14 03:06:08.544048] bdev.c:8090:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:22.692 [2024-05-14 03:06:08.694528] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.692 [2024-05-14 03:06:08.694579] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:22.692 [2024-05-14 03:06:08.694614] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:22.692 [2024-05-14 03:06:08.694624] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.692 [2024-05-14 03:06:08.694693] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.692 [2024-05-14 03:06:08.694712] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:22.692 [2024-05-14 03:06:08.694723] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:19:22.692 [2024-05-14 03:06:08.694740] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.693 [2024-05-14 03:06:08.694779] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:22.693 [2024-05-14 03:06:08.695035] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:22.693 [2024-05-14 03:06:08.695068] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.693 [2024-05-14 03:06:08.695078] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:22.693 [2024-05-14 03:06:08.695093] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:19:22.693 [2024-05-14 03:06:08.695102] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.693 [2024-05-14 03:06:08.696445] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:22.693 [2024-05-14 03:06:08.698561] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.693 [2024-05-14 03:06:08.698598] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:22.693 [2024-05-14 03:06:08.698630] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.119 ms 00:19:22.693 [2024-05-14 03:06:08.698654] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.693 [2024-05-14 03:06:08.698712] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.693 [2024-05-14 03:06:08.698730] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:22.693 [2024-05-14 03:06:08.698741] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:19:22.693 [2024-05-14 03:06:08.698750] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.693 [2024-05-14 03:06:08.703101] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.693 [2024-05-14 03:06:08.703190] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:22.693 [2024-05-14 03:06:08.703227] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.277 ms 00:19:22.693 [2024-05-14 03:06:08.703237] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.693 [2024-05-14 03:06:08.703321] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.693 [2024-05-14 03:06:08.703344] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:22.693 [2024-05-14 03:06:08.703365] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:19:22.693 [2024-05-14 03:06:08.703375] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.693 [2024-05-14 03:06:08.703441] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.693 [2024-05-14 03:06:08.703458] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:22.693 [2024-05-14 03:06:08.703469] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:22.693 [2024-05-14 03:06:08.703485] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.693 [2024-05-14 03:06:08.703554] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:22.693 [2024-05-14 03:06:08.704951] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.693 [2024-05-14 03:06:08.704986] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:22.693 [2024-05-14 03:06:08.705017] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.426 ms 00:19:22.693 [2024-05-14 03:06:08.705026] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.693 [2024-05-14 03:06:08.705059] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.693 [2024-05-14 03:06:08.705074] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:22.693 [2024-05-14 03:06:08.705085] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:22.693 [2024-05-14 03:06:08.705100] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.693 [2024-05-14 03:06:08.705134] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:22.693 [2024-05-14 03:06:08.705205] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:19:22.693 [2024-05-14 03:06:08.705255] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:22.693 [2024-05-14 03:06:08.705298] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:19:22.693 [2024-05-14 03:06:08.705377] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:19:22.693 [2024-05-14 03:06:08.705395] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:22.693 [2024-05-14 03:06:08.705411] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:19:22.693 [2024-05-14 03:06:08.705425] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:22.693 [2024-05-14 03:06:08.705437] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:22.693 [2024-05-14 03:06:08.705465] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:22.693 [2024-05-14 03:06:08.705491] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P 
address size: 4 00:19:22.693 [2024-05-14 03:06:08.705501] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:19:22.693 [2024-05-14 03:06:08.705524] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:19:22.693 [2024-05-14 03:06:08.705536] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.693 [2024-05-14 03:06:08.705556] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:22.693 [2024-05-14 03:06:08.705568] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.405 ms 00:19:22.693 [2024-05-14 03:06:08.705616] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.693 [2024-05-14 03:06:08.705689] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.693 [2024-05-14 03:06:08.705711] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:22.693 [2024-05-14 03:06:08.705723] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:19:22.693 [2024-05-14 03:06:08.705737] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.693 [2024-05-14 03:06:08.705820] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:22.693 [2024-05-14 03:06:08.705836] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:22.693 [2024-05-14 03:06:08.705848] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:22.693 [2024-05-14 03:06:08.705858] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.693 [2024-05-14 03:06:08.705869] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:22.693 [2024-05-14 03:06:08.705878] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:22.693 [2024-05-14 03:06:08.705888] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:22.693 [2024-05-14 03:06:08.705897] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:22.693 [2024-05-14 03:06:08.705907] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:22.693 [2024-05-14 03:06:08.705916] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:22.693 [2024-05-14 03:06:08.705925] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:22.693 [2024-05-14 03:06:08.705934] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:22.693 [2024-05-14 03:06:08.705944] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:22.693 [2024-05-14 03:06:08.705953] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:22.693 [2024-05-14 03:06:08.705962] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:19:22.693 [2024-05-14 03:06:08.705989] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.693 [2024-05-14 03:06:08.705999] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:22.693 [2024-05-14 03:06:08.706009] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:19:22.693 [2024-05-14 03:06:08.706018] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.693 [2024-05-14 03:06:08.706028] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:19:22.693 [2024-05-14 03:06:08.706038] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:19:22.693 [2024-05-14 03:06:08.706048] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:19:22.693 [2024-05-14 03:06:08.706057] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:22.693 [2024-05-14 03:06:08.706066] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:22.693 [2024-05-14 03:06:08.706075] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:22.693 [2024-05-14 03:06:08.706084] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:22.693 [2024-05-14 03:06:08.706094] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:19:22.693 [2024-05-14 03:06:08.706102] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:22.693 [2024-05-14 03:06:08.706112] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:22.693 [2024-05-14 03:06:08.706120] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:22.693 [2024-05-14 03:06:08.706129] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:22.693 [2024-05-14 03:06:08.706144] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:22.693 [2024-05-14 03:06:08.706154] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:19:22.693 [2024-05-14 03:06:08.706163] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:22.694 [2024-05-14 03:06:08.706172] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:22.694 [2024-05-14 03:06:08.706181] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:22.694 [2024-05-14 03:06:08.706190] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:22.694 [2024-05-14 03:06:08.706199] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:22.694 [2024-05-14 03:06:08.706209] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:19:22.694 [2024-05-14 03:06:08.706218] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:22.694 [2024-05-14 03:06:08.706246] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:22.694 [2024-05-14 03:06:08.706263] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:22.694 [2024-05-14 03:06:08.706274] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:22.694 [2024-05-14 03:06:08.706283] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.694 [2024-05-14 03:06:08.706293] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:22.694 [2024-05-14 03:06:08.706303] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:22.694 [2024-05-14 03:06:08.706312] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:22.694 [2024-05-14 03:06:08.706325] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:22.694 [2024-05-14 03:06:08.706334] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:22.694 [2024-05-14 03:06:08.706344] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:22.694 [2024-05-14 03:06:08.706354] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:22.694 [2024-05-14 03:06:08.706368] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:22.694 [2024-05-14 
03:06:08.706379] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:22.694 [2024-05-14 03:06:08.706390] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:19:22.694 [2024-05-14 03:06:08.706400] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:19:22.694 [2024-05-14 03:06:08.706410] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:19:22.694 [2024-05-14 03:06:08.706421] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:19:22.694 [2024-05-14 03:06:08.706431] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:19:22.694 [2024-05-14 03:06:08.706441] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:19:22.694 [2024-05-14 03:06:08.706451] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:19:22.694 [2024-05-14 03:06:08.706462] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:19:22.694 [2024-05-14 03:06:08.706472] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:19:22.694 [2024-05-14 03:06:08.706482] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:19:22.694 [2024-05-14 03:06:08.706495] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:19:22.694 [2024-05-14 03:06:08.706507] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:19:22.694 [2024-05-14 03:06:08.706517] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:22.694 [2024-05-14 03:06:08.706528] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:22.694 [2024-05-14 03:06:08.706539] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:22.694 [2024-05-14 03:06:08.706550] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:22.694 [2024-05-14 03:06:08.706560] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:22.694 [2024-05-14 03:06:08.706570] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:22.694 [2024-05-14 03:06:08.706582] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.694 [2024-05-14 03:06:08.706593] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:22.694 [2024-05-14 03:06:08.706604] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.804 ms 00:19:22.694 [2024-05-14 03:06:08.706626] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.694 [2024-05-14 03:06:08.712743] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.694 [2024-05-14 03:06:08.712784] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:22.694 [2024-05-14 03:06:08.712817] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.068 ms 00:19:22.694 [2024-05-14 03:06:08.712827] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.694 [2024-05-14 03:06:08.712916] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.694 [2024-05-14 03:06:08.712931] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:22.694 [2024-05-14 03:06:08.712943] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:19:22.694 [2024-05-14 03:06:08.712964] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.953 [2024-05-14 03:06:08.733231] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.953 [2024-05-14 03:06:08.733294] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:22.953 [2024-05-14 03:06:08.733319] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.216 ms 00:19:22.953 [2024-05-14 03:06:08.733353] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.953 [2024-05-14 03:06:08.733426] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.953 [2024-05-14 03:06:08.733448] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:22.953 [2024-05-14 03:06:08.733464] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:22.953 [2024-05-14 03:06:08.733498] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.953 [2024-05-14 03:06:08.733937] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.953 [2024-05-14 03:06:08.733971] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:22.953 [2024-05-14 03:06:08.733989] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.351 ms 00:19:22.953 [2024-05-14 03:06:08.734003] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.953 [2024-05-14 03:06:08.734237] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.953 [2024-05-14 03:06:08.734264] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:22.953 [2024-05-14 03:06:08.734280] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:19:22.953 [2024-05-14 03:06:08.734293] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.953 [2024-05-14 03:06:08.739856] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.953 [2024-05-14 03:06:08.739894] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:22.953 [2024-05-14 03:06:08.739932] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.520 ms 00:19:22.953 [2024-05-14 03:06:08.739951] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.953 [2024-05-14 03:06:08.742354] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:22.953 [2024-05-14 03:06:08.742407] 
ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:22.953 [2024-05-14 03:06:08.742446] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.953 [2024-05-14 03:06:08.742457] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:22.953 [2024-05-14 03:06:08.742468] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.339 ms 00:19:22.953 [2024-05-14 03:06:08.742477] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.953 [2024-05-14 03:06:08.756665] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.953 [2024-05-14 03:06:08.756704] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:22.953 [2024-05-14 03:06:08.756737] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.146 ms 00:19:22.953 [2024-05-14 03:06:08.756765] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.953 [2024-05-14 03:06:08.758623] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.953 [2024-05-14 03:06:08.758660] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:22.953 [2024-05-14 03:06:08.758690] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.800 ms 00:19:22.953 [2024-05-14 03:06:08.758700] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.953 [2024-05-14 03:06:08.760503] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.953 [2024-05-14 03:06:08.760539] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:22.953 [2024-05-14 03:06:08.760585] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.758 ms 00:19:22.953 [2024-05-14 03:06:08.760594] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.953 [2024-05-14 03:06:08.760807] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.953 [2024-05-14 03:06:08.760828] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:22.953 [2024-05-14 03:06:08.760839] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:19:22.953 [2024-05-14 03:06:08.760849] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.953 [2024-05-14 03:06:08.778723] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.953 [2024-05-14 03:06:08.778786] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:22.953 [2024-05-14 03:06:08.778821] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.847 ms 00:19:22.953 [2024-05-14 03:06:08.778832] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.953 [2024-05-14 03:06:08.786469] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:22.953 [2024-05-14 03:06:08.788781] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.953 [2024-05-14 03:06:08.788813] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:22.953 [2024-05-14 03:06:08.788844] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.890 ms 00:19:22.953 [2024-05-14 03:06:08.788865] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.953 [2024-05-14 03:06:08.788946] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.953 [2024-05-14 
03:06:08.788963] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:22.953 [2024-05-14 03:06:08.788975] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:22.953 [2024-05-14 03:06:08.788990] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.953 [2024-05-14 03:06:08.789064] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.953 [2024-05-14 03:06:08.789085] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:22.953 [2024-05-14 03:06:08.789096] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:19:22.953 [2024-05-14 03:06:08.789106] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.953 [2024-05-14 03:06:08.791056] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.953 [2024-05-14 03:06:08.791109] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:19:22.953 [2024-05-14 03:06:08.791139] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.925 ms 00:19:22.953 [2024-05-14 03:06:08.791205] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.953 [2024-05-14 03:06:08.791234] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.953 [2024-05-14 03:06:08.791249] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:22.953 [2024-05-14 03:06:08.791260] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:22.953 [2024-05-14 03:06:08.791276] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.953 [2024-05-14 03:06:08.791316] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:22.953 [2024-05-14 03:06:08.791336] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.953 [2024-05-14 03:06:08.791347] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:22.953 [2024-05-14 03:06:08.791357] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:19:22.953 [2024-05-14 03:06:08.791367] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.953 [2024-05-14 03:06:08.794799] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.953 [2024-05-14 03:06:08.794838] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:22.953 [2024-05-14 03:06:08.794870] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.410 ms 00:19:22.953 [2024-05-14 03:06:08.794895] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.953 [2024-05-14 03:06:08.794965] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.953 [2024-05-14 03:06:08.794991] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:22.953 [2024-05-14 03:06:08.795002] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:19:22.953 [2024-05-14 03:06:08.795015] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.953 [2024-05-14 03:06:08.796235] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 101.145 ms, result 0 00:20:05.675  Copying: 23/1024 [MB] (23 MBps) Copying: 48/1024 [MB] (24 MBps) Copying: 73/1024 [MB] (24 MBps) Copying: 97/1024 [MB] (24 MBps) Copying: 122/1024 [MB] (25 MBps) Copying: 147/1024 [MB] (24 MBps) Copying: 
173/1024 [MB] (25 MBps) Copying: 198/1024 [MB] (24 MBps) Copying: 222/1024 [MB] (24 MBps) Copying: 247/1024 [MB] (24 MBps) Copying: 271/1024 [MB] (24 MBps) Copying: 295/1024 [MB] (24 MBps) Copying: 320/1024 [MB] (24 MBps) Copying: 345/1024 [MB] (24 MBps) Copying: 370/1024 [MB] (24 MBps) Copying: 395/1024 [MB] (25 MBps) Copying: 421/1024 [MB] (25 MBps) Copying: 446/1024 [MB] (25 MBps) Copying: 471/1024 [MB] (25 MBps) Copying: 496/1024 [MB] (25 MBps) Copying: 521/1024 [MB] (25 MBps) Copying: 546/1024 [MB] (24 MBps) Copying: 572/1024 [MB] (25 MBps) Copying: 597/1024 [MB] (25 MBps) Copying: 622/1024 [MB] (24 MBps) Copying: 647/1024 [MB] (25 MBps) Copying: 671/1024 [MB] (23 MBps) Copying: 695/1024 [MB] (23 MBps) Copying: 719/1024 [MB] (24 MBps) Copying: 743/1024 [MB] (23 MBps) Copying: 766/1024 [MB] (23 MBps) Copying: 790/1024 [MB] (23 MBps) Copying: 814/1024 [MB] (24 MBps) Copying: 838/1024 [MB] (24 MBps) Copying: 862/1024 [MB] (23 MBps) Copying: 886/1024 [MB] (23 MBps) Copying: 910/1024 [MB] (23 MBps) Copying: 934/1024 [MB] (23 MBps) Copying: 959/1024 [MB] (25 MBps) Copying: 984/1024 [MB] (25 MBps) Copying: 1009/1024 [MB] (25 MBps) Copying: 1023/1024 [MB] (13 MBps) Copying: 1024/1024 [MB] (average 24 MBps)[2024-05-14 03:06:51.469477] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.675 [2024-05-14 03:06:51.469697] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:05.675 [2024-05-14 03:06:51.469893] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:20:05.675 [2024-05-14 03:06:51.469933] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.675 [2024-05-14 03:06:51.472320] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:05.675 [2024-05-14 03:06:51.476248] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.675 [2024-05-14 03:06:51.476298] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:05.675 [2024-05-14 03:06:51.476316] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.875 ms 00:20:05.675 [2024-05-14 03:06:51.476328] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.675 [2024-05-14 03:06:51.489315] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.675 [2024-05-14 03:06:51.489360] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:05.675 [2024-05-14 03:06:51.489380] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.742 ms 00:20:05.675 [2024-05-14 03:06:51.489391] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.675 [2024-05-14 03:06:51.509986] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.675 [2024-05-14 03:06:51.510044] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:05.675 [2024-05-14 03:06:51.510078] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.563 ms 00:20:05.675 [2024-05-14 03:06:51.510089] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.675 [2024-05-14 03:06:51.516474] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.675 [2024-05-14 03:06:51.516508] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:20:05.675 [2024-05-14 03:06:51.516523] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.321 ms 00:20:05.675 [2024-05-14 03:06:51.516532] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.675 [2024-05-14 03:06:51.517987] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.675 [2024-05-14 03:06:51.518057] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:05.675 [2024-05-14 03:06:51.518087] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.373 ms 00:20:05.675 [2024-05-14 03:06:51.518097] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.675 [2024-05-14 03:06:51.521294] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.675 [2024-05-14 03:06:51.521345] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:05.675 [2024-05-14 03:06:51.521375] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.163 ms 00:20:05.675 [2024-05-14 03:06:51.521401] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.675 [2024-05-14 03:06:51.623490] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.675 [2024-05-14 03:06:51.623536] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:05.675 [2024-05-14 03:06:51.623555] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 102.035 ms 00:20:05.675 [2024-05-14 03:06:51.623566] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.675 [2024-05-14 03:06:51.625474] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.675 [2024-05-14 03:06:51.625510] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:05.675 [2024-05-14 03:06:51.625524] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.862 ms 00:20:05.675 [2024-05-14 03:06:51.625534] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.675 [2024-05-14 03:06:51.626969] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.675 [2024-05-14 03:06:51.627006] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:05.675 [2024-05-14 03:06:51.627021] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.400 ms 00:20:05.675 [2024-05-14 03:06:51.627030] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.675 [2024-05-14 03:06:51.628278] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.675 [2024-05-14 03:06:51.628314] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:05.675 [2024-05-14 03:06:51.628345] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.213 ms 00:20:05.675 [2024-05-14 03:06:51.628356] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.675 [2024-05-14 03:06:51.629563] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.675 [2024-05-14 03:06:51.629599] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:05.675 [2024-05-14 03:06:51.629613] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.128 ms 00:20:05.675 [2024-05-14 03:06:51.629622] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.675 [2024-05-14 03:06:51.629655] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:05.675 [2024-05-14 03:06:51.629691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 114944 / 261120 wr_cnt: 1 state: open 00:20:05.675 [2024-05-14 03:06:51.629710] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:05.675 [2024-05-14 03:06:51.629722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:05.675 [2024-05-14 03:06:51.629732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:05.675 [2024-05-14 03:06:51.629743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:05.675 [2024-05-14 03:06:51.629753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:05.675 [2024-05-14 03:06:51.629763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:05.675 [2024-05-14 03:06:51.629773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:05.675 [2024-05-14 03:06:51.629784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:05.675 [2024-05-14 03:06:51.629794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:05.675 [2024-05-14 03:06:51.629805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:05.675 [2024-05-14 03:06:51.629815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:05.675 [2024-05-14 03:06:51.629825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:05.675 [2024-05-14 03:06:51.629835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:05.675 [2024-05-14 03:06:51.629846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:05.675 [2024-05-14 03:06:51.629856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:05.675 [2024-05-14 03:06:51.629867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.629877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.629887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.629897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.629907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.629918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.629929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.629939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.629949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.629959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.629971] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.629981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.629991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 
[2024-05-14 03:06:51.630253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 
state: free 00:20:05.676 [2024-05-14 03:06:51.630506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:05.676 [2024-05-14 03:06:51.630762] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:20:05.676 [2024-05-14 03:06:51.630772] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 212fbdc4-0917-4fb4-873d-5428fad743dd 00:20:05.676 [2024-05-14 03:06:51.630782] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 114944 00:20:05.676 [2024-05-14 03:06:51.630792] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 115904 00:20:05.676 [2024-05-14 03:06:51.630801] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 114944 00:20:05.676 [2024-05-14 03:06:51.630811] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0084 00:20:05.676 [2024-05-14 03:06:51.630830] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:05.676 [2024-05-14 03:06:51.630839] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:05.676 [2024-05-14 03:06:51.630849] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:05.676 [2024-05-14 03:06:51.630857] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:05.676 [2024-05-14 03:06:51.630866] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:05.677 [2024-05-14 03:06:51.630876] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.677 [2024-05-14 03:06:51.630890] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:05.677 [2024-05-14 03:06:51.630900] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.222 ms 00:20:05.677 [2024-05-14 03:06:51.630917] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.677 [2024-05-14 03:06:51.632286] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.677 [2024-05-14 03:06:51.632315] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:05.677 [2024-05-14 03:06:51.632339] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.346 ms 00:20:05.677 [2024-05-14 03:06:51.632350] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.677 [2024-05-14 03:06:51.632417] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.677 [2024-05-14 03:06:51.632451] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:05.677 [2024-05-14 03:06:51.632471] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:20:05.677 [2024-05-14 03:06:51.632504] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.677 [2024-05-14 03:06:51.637512] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.677 [2024-05-14 03:06:51.637684] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:05.677 [2024-05-14 03:06:51.637790] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.677 [2024-05-14 03:06:51.637846] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.677 [2024-05-14 03:06:51.637929] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.677 [2024-05-14 03:06:51.638008] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:05.677 [2024-05-14 03:06:51.638058] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.677 [2024-05-14 03:06:51.638094] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.677 [2024-05-14 03:06:51.638208] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:20:05.677 [2024-05-14 03:06:51.638327] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:05.677 [2024-05-14 03:06:51.638380] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.677 [2024-05-14 03:06:51.638415] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.677 [2024-05-14 03:06:51.638468] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.677 [2024-05-14 03:06:51.638533] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:05.677 [2024-05-14 03:06:51.638622] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.677 [2024-05-14 03:06:51.638679] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.677 [2024-05-14 03:06:51.646190] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.677 [2024-05-14 03:06:51.646414] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:05.677 [2024-05-14 03:06:51.646563] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.677 [2024-05-14 03:06:51.646623] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.677 [2024-05-14 03:06:51.650126] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.677 [2024-05-14 03:06:51.650312] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:05.677 [2024-05-14 03:06:51.650436] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.677 [2024-05-14 03:06:51.650484] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.677 [2024-05-14 03:06:51.650632] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.677 [2024-05-14 03:06:51.650709] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:05.677 [2024-05-14 03:06:51.650823] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.677 [2024-05-14 03:06:51.650873] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.677 [2024-05-14 03:06:51.650962] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.677 [2024-05-14 03:06:51.651045] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:05.677 [2024-05-14 03:06:51.651090] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.677 [2024-05-14 03:06:51.651191] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.677 [2024-05-14 03:06:51.651347] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.677 [2024-05-14 03:06:51.651427] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:05.677 [2024-05-14 03:06:51.651530] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.677 [2024-05-14 03:06:51.651551] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.677 [2024-05-14 03:06:51.651604] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.677 [2024-05-14 03:06:51.651621] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:05.677 [2024-05-14 03:06:51.651639] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.677 [2024-05-14 03:06:51.651664] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.677 [2024-05-14 
03:06:51.651707] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.677 [2024-05-14 03:06:51.651721] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:05.677 [2024-05-14 03:06:51.651732] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.677 [2024-05-14 03:06:51.651742] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.677 [2024-05-14 03:06:51.651788] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.677 [2024-05-14 03:06:51.651809] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:05.677 [2024-05-14 03:06:51.651820] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.677 [2024-05-14 03:06:51.651830] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.677 [2024-05-14 03:06:51.652000] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 183.622 ms, result 0 00:20:06.613 00:20:06.613 00:20:06.613 03:06:52 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:20:06.613 [2024-05-14 03:06:52.411228] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:20:06.613 [2024-05-14 03:06:52.411376] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92336 ] 00:20:06.613 [2024-05-14 03:06:52.547408] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:20:06.613 [2024-05-14 03:06:52.566812] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.613 [2024-05-14 03:06:52.601374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:06.873 [2024-05-14 03:06:52.685909] bdev.c:8090:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:06.873 [2024-05-14 03:06:52.686023] bdev.c:8090:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:06.873 [2024-05-14 03:06:52.837444] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.873 [2024-05-14 03:06:52.837495] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:06.873 [2024-05-14 03:06:52.837547] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:06.873 [2024-05-14 03:06:52.837557] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.873 [2024-05-14 03:06:52.837636] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.873 [2024-05-14 03:06:52.837672] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:06.873 [2024-05-14 03:06:52.837684] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:20:06.873 [2024-05-14 03:06:52.837693] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.873 [2024-05-14 03:06:52.837729] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:06.873 [2024-05-14 03:06:52.837995] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:06.873 [2024-05-14 03:06:52.838041] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.873 [2024-05-14 03:06:52.838060] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:06.873 [2024-05-14 03:06:52.838075] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:20:06.873 [2024-05-14 03:06:52.838085] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.873 [2024-05-14 03:06:52.839342] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:06.873 [2024-05-14 03:06:52.841667] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.873 [2024-05-14 03:06:52.841717] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:06.873 [2024-05-14 03:06:52.841739] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.327 ms 00:20:06.873 [2024-05-14 03:06:52.841763] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.873 [2024-05-14 03:06:52.841872] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.873 [2024-05-14 03:06:52.841906] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:06.873 [2024-05-14 03:06:52.841935] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:20:06.873 [2024-05-14 03:06:52.841946] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.873 [2024-05-14 03:06:52.846375] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.873 [2024-05-14 03:06:52.846413] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:06.873 [2024-05-14 03:06:52.846460] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.356 ms 00:20:06.873 [2024-05-14 03:06:52.846472] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.873 [2024-05-14 03:06:52.846607] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.873 [2024-05-14 03:06:52.846624] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:06.873 [2024-05-14 03:06:52.846636] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:20:06.873 [2024-05-14 03:06:52.846646] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.873 [2024-05-14 03:06:52.846737] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.873 [2024-05-14 03:06:52.846754] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:06.873 [2024-05-14 03:06:52.846766] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:06.873 [2024-05-14 03:06:52.846798] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.873 [2024-05-14 03:06:52.846840] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:06.873 [2024-05-14 03:06:52.848316] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.873 [2024-05-14 03:06:52.848351] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:06.873 [2024-05-14 03:06:52.848367] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.483 ms 00:20:06.873 [2024-05-14 03:06:52.848377] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.873 [2024-05-14 03:06:52.848457] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.873 [2024-05-14 03:06:52.848476] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:06.873 [2024-05-14 03:06:52.848503] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:20:06.873 [2024-05-14 03:06:52.848516] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.873 [2024-05-14 03:06:52.848563] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:06.873 [2024-05-14 03:06:52.848589] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:20:06.873 [2024-05-14 03:06:52.848643] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:06.873 [2024-05-14 03:06:52.848681] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:20:06.873 [2024-05-14 03:06:52.848753] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:20:06.873 [2024-05-14 03:06:52.848768] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:06.873 [2024-05-14 03:06:52.848786] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:20:06.873 [2024-05-14 03:06:52.848800] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:06.873 [2024-05-14 03:06:52.848811] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:06.873 [2024-05-14 03:06:52.848822] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:06.873 [2024-05-14 03:06:52.848840] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P 
address size: 4 00:20:06.873 [2024-05-14 03:06:52.848850] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:20:06.873 [2024-05-14 03:06:52.848866] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:20:06.873 [2024-05-14 03:06:52.848883] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.873 [2024-05-14 03:06:52.848894] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:06.873 [2024-05-14 03:06:52.848910] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:20:06.873 [2024-05-14 03:06:52.848927] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.873 [2024-05-14 03:06:52.849019] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.873 [2024-05-14 03:06:52.849033] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:06.873 [2024-05-14 03:06:52.849042] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:20:06.873 [2024-05-14 03:06:52.849051] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.873 [2024-05-14 03:06:52.849124] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:06.873 [2024-05-14 03:06:52.849154] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:06.873 [2024-05-14 03:06:52.849165] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:06.873 [2024-05-14 03:06:52.849212] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:06.873 [2024-05-14 03:06:52.849236] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:06.873 [2024-05-14 03:06:52.849245] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:06.873 [2024-05-14 03:06:52.849255] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:06.873 [2024-05-14 03:06:52.849269] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:06.873 [2024-05-14 03:06:52.849280] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:06.873 [2024-05-14 03:06:52.849289] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:06.873 [2024-05-14 03:06:52.849315] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:06.873 [2024-05-14 03:06:52.849324] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:06.873 [2024-05-14 03:06:52.849333] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:06.873 [2024-05-14 03:06:52.849343] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:06.873 [2024-05-14 03:06:52.849353] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:20:06.873 [2024-05-14 03:06:52.849372] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:06.873 [2024-05-14 03:06:52.849388] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:06.873 [2024-05-14 03:06:52.849398] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:20:06.873 [2024-05-14 03:06:52.849408] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:06.873 [2024-05-14 03:06:52.849417] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:20:06.873 [2024-05-14 03:06:52.849427] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:20:06.873 [2024-05-14 03:06:52.849436] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:20:06.873 [2024-05-14 03:06:52.849446] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:06.873 [2024-05-14 03:06:52.849475] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:06.873 [2024-05-14 03:06:52.849486] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:06.873 [2024-05-14 03:06:52.849500] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:06.873 [2024-05-14 03:06:52.849510] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:20:06.873 [2024-05-14 03:06:52.849519] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:06.873 [2024-05-14 03:06:52.849529] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:06.873 [2024-05-14 03:06:52.849538] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:06.873 [2024-05-14 03:06:52.849548] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:06.873 [2024-05-14 03:06:52.849557] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:06.873 [2024-05-14 03:06:52.849567] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:20:06.873 [2024-05-14 03:06:52.849593] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:06.873 [2024-05-14 03:06:52.849604] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:06.873 [2024-05-14 03:06:52.849629] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:06.873 [2024-05-14 03:06:52.849639] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:06.873 [2024-05-14 03:06:52.849665] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:06.873 [2024-05-14 03:06:52.849675] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:20:06.874 [2024-05-14 03:06:52.849688] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:06.874 [2024-05-14 03:06:52.849699] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:06.874 [2024-05-14 03:06:52.849713] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:06.874 [2024-05-14 03:06:52.849724] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:06.874 [2024-05-14 03:06:52.849734] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:06.874 [2024-05-14 03:06:52.849745] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:06.874 [2024-05-14 03:06:52.849755] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:06.874 [2024-05-14 03:06:52.849765] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:06.874 [2024-05-14 03:06:52.849776] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:06.874 [2024-05-14 03:06:52.849785] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:06.874 [2024-05-14 03:06:52.849795] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:06.874 [2024-05-14 03:06:52.849806] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:06.874 [2024-05-14 03:06:52.849820] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:06.874 [2024-05-14 
03:06:52.849841] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:06.874 [2024-05-14 03:06:52.849853] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:20:06.874 [2024-05-14 03:06:52.849864] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:20:06.874 [2024-05-14 03:06:52.849878] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:20:06.874 [2024-05-14 03:06:52.849891] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:20:06.874 [2024-05-14 03:06:52.849902] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:20:06.874 [2024-05-14 03:06:52.849913] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:20:06.874 [2024-05-14 03:06:52.849924] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:20:06.874 [2024-05-14 03:06:52.849935] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:20:06.874 [2024-05-14 03:06:52.849946] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:20:06.874 [2024-05-14 03:06:52.849957] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:20:06.874 [2024-05-14 03:06:52.849968] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:20:06.874 [2024-05-14 03:06:52.849980] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:20:06.874 [2024-05-14 03:06:52.850007] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:06.874 [2024-05-14 03:06:52.850019] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:06.874 [2024-05-14 03:06:52.850046] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:06.874 [2024-05-14 03:06:52.850056] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:06.874 [2024-05-14 03:06:52.850067] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:06.874 [2024-05-14 03:06:52.850077] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:06.874 [2024-05-14 03:06:52.850106] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.874 [2024-05-14 03:06:52.850118] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:06.874 [2024-05-14 03:06:52.850128] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.019 ms 00:20:06.874 [2024-05-14 03:06:52.850157] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.874 [2024-05-14 03:06:52.856474] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.874 [2024-05-14 03:06:52.856544] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:06.874 [2024-05-14 03:06:52.856575] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.247 ms 00:20:06.874 [2024-05-14 03:06:52.856590] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.874 [2024-05-14 03:06:52.856690] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.874 [2024-05-14 03:06:52.856704] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:06.874 [2024-05-14 03:06:52.856716] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:20:06.874 [2024-05-14 03:06:52.856725] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.874 [2024-05-14 03:06:52.877223] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.874 [2024-05-14 03:06:52.877291] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:06.874 [2024-05-14 03:06:52.877316] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.433 ms 00:20:06.874 [2024-05-14 03:06:52.877331] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.874 [2024-05-14 03:06:52.877406] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.874 [2024-05-14 03:06:52.877427] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:06.874 [2024-05-14 03:06:52.877443] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:06.874 [2024-05-14 03:06:52.877465] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.874 [2024-05-14 03:06:52.877896] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.874 [2024-05-14 03:06:52.877925] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:06.874 [2024-05-14 03:06:52.877942] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.347 ms 00:20:06.874 [2024-05-14 03:06:52.877955] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.874 [2024-05-14 03:06:52.878186] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.874 [2024-05-14 03:06:52.878213] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:06.874 [2024-05-14 03:06:52.878228] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.180 ms 00:20:06.874 [2024-05-14 03:06:52.878242] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.874 [2024-05-14 03:06:52.884272] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.874 [2024-05-14 03:06:52.884312] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:06.874 [2024-05-14 03:06:52.884329] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.979 ms 00:20:06.874 [2024-05-14 03:06:52.884340] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.874 [2024-05-14 03:06:52.886636] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:20:06.874 [2024-05-14 03:06:52.886691] 
ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:06.874 [2024-05-14 03:06:52.886730] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.874 [2024-05-14 03:06:52.886741] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:06.874 [2024-05-14 03:06:52.886757] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.285 ms 00:20:06.874 [2024-05-14 03:06:52.886768] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.133 [2024-05-14 03:06:52.902454] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.133 [2024-05-14 03:06:52.902510] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:07.133 [2024-05-14 03:06:52.902542] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.644 ms 00:20:07.133 [2024-05-14 03:06:52.902559] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.133 [2024-05-14 03:06:52.904600] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.133 [2024-05-14 03:06:52.904635] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:07.133 [2024-05-14 03:06:52.904664] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.983 ms 00:20:07.133 [2024-05-14 03:06:52.904685] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.133 [2024-05-14 03:06:52.906437] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.133 [2024-05-14 03:06:52.906471] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:07.133 [2024-05-14 03:06:52.906501] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.713 ms 00:20:07.133 [2024-05-14 03:06:52.906510] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.133 [2024-05-14 03:06:52.906758] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.133 [2024-05-14 03:06:52.906783] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:07.133 [2024-05-14 03:06:52.906802] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.178 ms 00:20:07.133 [2024-05-14 03:06:52.906812] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.133 [2024-05-14 03:06:52.927495] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.133 [2024-05-14 03:06:52.927579] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:07.133 [2024-05-14 03:06:52.927615] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.657 ms 00:20:07.133 [2024-05-14 03:06:52.927626] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.133 [2024-05-14 03:06:52.936295] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:07.133 [2024-05-14 03:06:52.939011] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.133 [2024-05-14 03:06:52.939074] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:07.133 [2024-05-14 03:06:52.939116] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.303 ms 00:20:07.133 [2024-05-14 03:06:52.939126] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.133 [2024-05-14 03:06:52.939258] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.133 
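Each management step in the startup trace above is reported by trace_step as a quadruple of NOTICE records: Action, name, duration and status. When reading a capture like this, a small awk sketch can pair step names with their durations; the log file name and the one-record-per-line layout are assumptions, and this helper is not part of the autotest scripts:

  awk '/407:trace_step/ { n = $0; sub(/.*name: /, "", n); sub(/ [0-9:.]+$/, "", n) }
       /409:trace_step/ { d = $0; sub(/.*duration: /, "", d); sub(/ ms.*/, " ms", d)
                          printf "%-40s %s\n", n, d }' ftl_startup.log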
[2024-05-14 03:06:52.939294] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:07.133 [2024-05-14 03:06:52.939307] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:07.133 [2024-05-14 03:06:52.939317] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.133 [2024-05-14 03:06:52.940700] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.133 [2024-05-14 03:06:52.940740] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:07.133 [2024-05-14 03:06:52.940772] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.332 ms 00:20:07.133 [2024-05-14 03:06:52.940786] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.133 [2024-05-14 03:06:52.942877] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.133 [2024-05-14 03:06:52.942937] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:20:07.133 [2024-05-14 03:06:52.942968] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.057 ms 00:20:07.133 [2024-05-14 03:06:52.943004] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.133 [2024-05-14 03:06:52.943067] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.133 [2024-05-14 03:06:52.943081] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:07.133 [2024-05-14 03:06:52.943091] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:07.133 [2024-05-14 03:06:52.943106] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.133 [2024-05-14 03:06:52.943152] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:07.133 [2024-05-14 03:06:52.943167] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.133 [2024-05-14 03:06:52.943194] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:07.133 [2024-05-14 03:06:52.943221] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:20:07.133 [2024-05-14 03:06:52.943252] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.133 [2024-05-14 03:06:52.946884] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.133 [2024-05-14 03:06:52.946925] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:07.133 [2024-05-14 03:06:52.946970] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.608 ms 00:20:07.133 [2024-05-14 03:06:52.946987] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.133 [2024-05-14 03:06:52.947097] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.133 [2024-05-14 03:06:52.947166] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:07.133 [2024-05-14 03:06:52.947200] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:20:07.133 [2024-05-14 03:06:52.947211] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.133 [2024-05-14 03:06:52.954905] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 115.598 ms, result 0 00:20:48.157  Copying: 21/1024 [MB] (21 MBps) Copying: 46/1024 [MB] (24 MBps) Copying: 70/1024 [MB] (24 MBps) Copying: 94/1024 [MB] (23 MBps) Copying: 118/1024 [MB] (24 MBps) Copying: 143/1024 [MB] (24 
MBps) Copying: 168/1024 [MB] (25 MBps) Copying: 192/1024 [MB] (24 MBps) Copying: 217/1024 [MB] (24 MBps) Copying: 242/1024 [MB] (24 MBps) Copying: 267/1024 [MB] (25 MBps) Copying: 296/1024 [MB] (28 MBps) Copying: 323/1024 [MB] (27 MBps) Copying: 351/1024 [MB] (27 MBps) Copying: 379/1024 [MB] (27 MBps) Copying: 407/1024 [MB] (27 MBps) Copying: 434/1024 [MB] (27 MBps) Copying: 461/1024 [MB] (27 MBps) Copying: 489/1024 [MB] (27 MBps) Copying: 516/1024 [MB] (27 MBps) Copying: 542/1024 [MB] (25 MBps) Copying: 566/1024 [MB] (24 MBps) Copying: 593/1024 [MB] (26 MBps) Copying: 619/1024 [MB] (26 MBps) Copying: 645/1024 [MB] (25 MBps) Copying: 670/1024 [MB] (25 MBps) Copying: 696/1024 [MB] (25 MBps) Copying: 720/1024 [MB] (23 MBps) Copying: 745/1024 [MB] (24 MBps) Copying: 770/1024 [MB] (24 MBps) Copying: 795/1024 [MB] (25 MBps) Copying: 820/1024 [MB] (24 MBps) Copying: 845/1024 [MB] (25 MBps) Copying: 869/1024 [MB] (24 MBps) Copying: 894/1024 [MB] (24 MBps) Copying: 918/1024 [MB] (24 MBps) Copying: 943/1024 [MB] (24 MBps) Copying: 967/1024 [MB] (23 MBps) Copying: 991/1024 [MB] (23 MBps) Copying: 1015/1024 [MB] (24 MBps) Copying: 1024/1024 [MB] (average 25 MBps)[2024-05-14 03:07:33.937977] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.157 [2024-05-14 03:07:33.938074] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:48.157 [2024-05-14 03:07:33.938096] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:48.157 [2024-05-14 03:07:33.938108] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.157 [2024-05-14 03:07:33.938197] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:48.157 [2024-05-14 03:07:33.938679] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.157 [2024-05-14 03:07:33.938716] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:48.157 [2024-05-14 03:07:33.938739] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.457 ms 00:20:48.157 [2024-05-14 03:07:33.938750] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.157 [2024-05-14 03:07:33.938994] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.157 [2024-05-14 03:07:33.939012] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:48.157 [2024-05-14 03:07:33.939024] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.220 ms 00:20:48.157 [2024-05-14 03:07:33.939035] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.157 [2024-05-14 03:07:33.944488] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.157 [2024-05-14 03:07:33.944533] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:48.157 [2024-05-14 03:07:33.944575] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.426 ms 00:20:48.157 [2024-05-14 03:07:33.944587] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.157 [2024-05-14 03:07:33.951370] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.157 [2024-05-14 03:07:33.951407] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:20:48.157 [2024-05-14 03:07:33.951423] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.737 ms 00:20:48.157 [2024-05-14 03:07:33.951434] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
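The copy loop above moves the full 1024 MB between FTL startup finishing at 03:06:52 and the first shutdown step at 03:07:33, roughly 41 seconds, which is consistent with the reported average of 25 MBps; the 41 s figure is read off the wall-clock stamps above rather than measured directly:

  awk 'BEGIN { printf "%.1f MBps\n", 1024 / 41 }'    # ~25.0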
00:20:48.157 [2024-05-14 03:07:33.953265] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.157 [2024-05-14 03:07:33.953308] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:48.157 [2024-05-14 03:07:33.953324] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.744 ms 00:20:48.157 [2024-05-14 03:07:33.953336] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.157 [2024-05-14 03:07:33.956001] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.157 [2024-05-14 03:07:33.956045] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:48.157 [2024-05-14 03:07:33.956063] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.624 ms 00:20:48.157 [2024-05-14 03:07:33.956075] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.157 [2024-05-14 03:07:34.069623] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.157 [2024-05-14 03:07:34.069708] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:48.157 [2024-05-14 03:07:34.069731] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 113.491 ms 00:20:48.157 [2024-05-14 03:07:34.069743] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.157 [2024-05-14 03:07:34.071778] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.157 [2024-05-14 03:07:34.071817] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:48.157 [2024-05-14 03:07:34.071833] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.999 ms 00:20:48.157 [2024-05-14 03:07:34.071843] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.157 [2024-05-14 03:07:34.073423] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.157 [2024-05-14 03:07:34.073459] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:48.157 [2024-05-14 03:07:34.073473] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.542 ms 00:20:48.157 [2024-05-14 03:07:34.073483] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.157 [2024-05-14 03:07:34.074704] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.157 [2024-05-14 03:07:34.074744] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:48.157 [2024-05-14 03:07:34.074760] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.185 ms 00:20:48.157 [2024-05-14 03:07:34.074786] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.157 [2024-05-14 03:07:34.076006] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.157 [2024-05-14 03:07:34.076043] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:48.157 [2024-05-14 03:07:34.076074] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.154 ms 00:20:48.157 [2024-05-14 03:07:34.076085] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.157 [2024-05-14 03:07:34.076159] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:48.157 [2024-05-14 03:07:34.076186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133632 / 261120 wr_cnt: 1 state: open 00:20:48.157 [2024-05-14 03:07:34.076201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 
261120 wr_cnt: 0 state: free 00:20:48.157 [2024-05-14 03:07:34.076213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:48.157 [2024-05-14 03:07:34.076225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:48.157 [2024-05-14 03:07:34.076236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:48.157 [2024-05-14 03:07:34.076248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:48.157 [2024-05-14 03:07:34.076260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:48.157 [2024-05-14 03:07:34.076271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:48.157 [2024-05-14 03:07:34.076284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:48.157 [2024-05-14 03:07:34.076295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:48.157 [2024-05-14 03:07:34.076307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:48.157 [2024-05-14 03:07:34.076319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:48.157 [2024-05-14 03:07:34.076330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:48.157 [2024-05-14 03:07:34.076342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:48.157 [2024-05-14 03:07:34.076354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:48.157 [2024-05-14 03:07:34.076366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:48.157 [2024-05-14 03:07:34.076377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:48.157 [2024-05-14 03:07:34.076388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:48.157 [2024-05-14 03:07:34.076400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:48.157 [2024-05-14 03:07:34.076412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:48.157 [2024-05-14 03:07:34.076424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:48.157 [2024-05-14 03:07:34.076436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:48.157 [2024-05-14 03:07:34.076462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:48.157 [2024-05-14 03:07:34.076473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:48.157 [2024-05-14 03:07:34.076484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:48.157 [2024-05-14 03:07:34.076495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:48.157 [2024-05-14 03:07:34.076523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:48.157 [2024-05-14 03:07:34.076549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:48.157 [2024-05-14 03:07:34.076560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:48.157 [2024-05-14 03:07:34.076571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.076581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.076592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.076603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.076614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.076625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.076636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.076647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.076657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.076668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.076678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.076689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.076700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.076710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.076721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.076731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.076742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.076753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.076763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.076774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.076784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.076794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.076805] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.076815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.076826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.076836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.076848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.076858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.076869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.076879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.076890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.076900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.076911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.076921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.076931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.076942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.076954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.076964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.076975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.076986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.076997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.077007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.077018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.077029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.077039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.077050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.077060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.077071] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.077082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.077092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.077103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.077114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.077124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.077135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.077161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.077173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.077184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.077206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.077218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.077229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.077240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.077251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.077262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.077273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.077284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.077295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.077305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.077316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.077329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.077341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.077352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:48.158 [2024-05-14 03:07:34.077372] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:48.158 [2024-05-14 03:07:34.077383] ftl_debug.c: 
212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 212fbdc4-0917-4fb4-873d-5428fad743dd 00:20:48.158 [2024-05-14 03:07:34.077394] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133632 00:20:48.158 [2024-05-14 03:07:34.077405] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 19648 00:20:48.158 [2024-05-14 03:07:34.077415] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 18688 00:20:48.158 [2024-05-14 03:07:34.077426] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0514 00:20:48.158 [2024-05-14 03:07:34.077436] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:48.158 [2024-05-14 03:07:34.077446] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:48.158 [2024-05-14 03:07:34.077457] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:48.158 [2024-05-14 03:07:34.077466] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:48.158 [2024-05-14 03:07:34.077475] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:48.158 [2024-05-14 03:07:34.077486] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.158 [2024-05-14 03:07:34.077513] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:48.158 [2024-05-14 03:07:34.077525] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.342 ms 00:20:48.158 [2024-05-14 03:07:34.077557] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.158 [2024-05-14 03:07:34.078871] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.158 [2024-05-14 03:07:34.078900] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:48.158 [2024-05-14 03:07:34.078915] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.292 ms 00:20:48.158 [2024-05-14 03:07:34.078926] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.158 [2024-05-14 03:07:34.078992] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.158 [2024-05-14 03:07:34.079021] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:48.158 [2024-05-14 03:07:34.079033] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:20:48.158 [2024-05-14 03:07:34.079054] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.158 [2024-05-14 03:07:34.084213] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.158 [2024-05-14 03:07:34.084389] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:48.158 [2024-05-14 03:07:34.084513] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.159 [2024-05-14 03:07:34.084665] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.159 [2024-05-14 03:07:34.084746] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.159 [2024-05-14 03:07:34.084762] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:48.159 [2024-05-14 03:07:34.084775] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.159 [2024-05-14 03:07:34.084786] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.159 [2024-05-14 03:07:34.084867] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.159 [2024-05-14 03:07:34.084896] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:48.159 [2024-05-14 03:07:34.084908] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.159 [2024-05-14 03:07:34.084918] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.159 [2024-05-14 03:07:34.084960] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.159 [2024-05-14 03:07:34.084975] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:48.159 [2024-05-14 03:07:34.084985] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.159 [2024-05-14 03:07:34.084995] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.159 [2024-05-14 03:07:34.093114] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.159 [2024-05-14 03:07:34.093193] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:48.159 [2024-05-14 03:07:34.093239] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.159 [2024-05-14 03:07:34.093250] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.159 [2024-05-14 03:07:34.097028] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.159 [2024-05-14 03:07:34.097066] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:48.159 [2024-05-14 03:07:34.097110] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.159 [2024-05-14 03:07:34.097120] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.159 [2024-05-14 03:07:34.097185] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.159 [2024-05-14 03:07:34.097212] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:48.159 [2024-05-14 03:07:34.097224] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.159 [2024-05-14 03:07:34.097234] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.159 [2024-05-14 03:07:34.097308] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.159 [2024-05-14 03:07:34.097330] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:48.159 [2024-05-14 03:07:34.097341] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.159 [2024-05-14 03:07:34.097352] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.159 [2024-05-14 03:07:34.097436] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.159 [2024-05-14 03:07:34.097454] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:48.159 [2024-05-14 03:07:34.097465] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.159 [2024-05-14 03:07:34.097476] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.159 [2024-05-14 03:07:34.097542] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.159 [2024-05-14 03:07:34.097576] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:48.159 [2024-05-14 03:07:34.097593] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.159 [2024-05-14 03:07:34.097604] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.159 [2024-05-14 03:07:34.097649] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:20:48.159 [2024-05-14 03:07:34.097663] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:48.159 [2024-05-14 03:07:34.097675] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.159 [2024-05-14 03:07:34.097687] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.159 [2024-05-14 03:07:34.097737] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.159 [2024-05-14 03:07:34.097759] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:48.159 [2024-05-14 03:07:34.097781] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.159 [2024-05-14 03:07:34.097792] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.159 [2024-05-14 03:07:34.097936] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 159.925 ms, result 0 00:20:48.416 00:20:48.416 00:20:48.416 03:07:34 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:50.941 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:20:50.941 03:07:36 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:20:50.941 03:07:36 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:20:50.942 03:07:36 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:20:50.942 03:07:36 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:50.942 03:07:36 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:50.942 03:07:36 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 90781 00:20:50.942 03:07:36 ftl.ftl_restore -- common/autotest_common.sh@946 -- # '[' -z 90781 ']' 00:20:50.942 03:07:36 ftl.ftl_restore -- common/autotest_common.sh@950 -- # kill -0 90781 00:20:50.942 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (90781) - No such process 00:20:50.942 03:07:36 ftl.ftl_restore -- common/autotest_common.sh@973 -- # echo 'Process with pid 90781 is not found' 00:20:50.942 Process with pid 90781 is not found 00:20:50.942 Remove shared memory files 00:20:50.942 03:07:36 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:20:50.942 03:07:36 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:20:50.942 03:07:36 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:20:50.942 03:07:36 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:20:50.942 03:07:36 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:20:50.942 03:07:36 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:20:50.942 03:07:36 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:20:50.942 ************************************ 00:20:50.942 END TEST ftl_restore 00:20:50.942 ************************************ 00:20:50.942 00:20:50.942 real 3m13.332s 00:20:50.942 user 2m59.991s 00:20:50.942 sys 0m14.979s 00:20:50.942 03:07:36 ftl.ftl_restore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:50.942 03:07:36 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:20:50.942 03:07:36 ftl -- ftl/ftl.sh@78 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:20:50.942 03:07:36 ftl -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:20:50.942 
03:07:36 ftl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:50.942 03:07:36 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:50.942 ************************************ 00:20:50.942 START TEST ftl_dirty_shutdown 00:20:50.942 ************************************ 00:20:50.942 03:07:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:20:50.942 * Looking for test storage... 00:20:50.942 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:50.942 03:07:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:50.942 03:07:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:20:50.942 03:07:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:50.942 03:07:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:50.942 03:07:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:20:50.942 03:07:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:50.942 03:07:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:50.942 03:07:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:50.942 03:07:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:50.942 03:07:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:50.942 03:07:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:50.942 03:07:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:50.942 03:07:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:50.942 03:07:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:50.942 03:07:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:50.942 03:07:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:50.942 03:07:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:50.942 03:07:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:50.942 03:07:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:50.942 03:07:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:50.942 03:07:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:50.942 03:07:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:50.942 03:07:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:50.942 03:07:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:50.942 03:07:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:50.942 03:07:36 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:50.942 03:07:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:50.942 03:07:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:50.942 03:07:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:50.942 03:07:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:50.942 03:07:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:50.942 03:07:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:20:50.942 03:07:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:20:50.942 03:07:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:20:50.942 03:07:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:20:50.942 03:07:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:20:50.942 03:07:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:20:50.942 03:07:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:20:50.942 03:07:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:20:50.942 03:07:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:20:50.942 03:07:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:20:50.942 03:07:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:20:50.942 03:07:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=92838 00:20:50.942 03:07:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:20:50.942 03:07:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 92838 00:20:50.942 03:07:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@827 -- # '[' -z 92838 ']' 00:20:50.942 03:07:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:50.942 03:07:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:50.942 03:07:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:50.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:50.942 03:07:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:50.942 03:07:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:51.200 [2024-05-14 03:07:37.033537] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:20:51.200 [2024-05-14 03:07:37.033779] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92838 ] 00:20:51.200 [2024-05-14 03:07:37.185045] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
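At this point dirty_shutdown.sh has parsed its options (nv_cache 0000:00:10.0, device 0000:00:11.0, block_size 4096, chunk_size 262144) and launches spdk_tgt pinned to core 0, waiting for its RPC socket before any bdev RPCs are issued. Reduced to a stand-alone sketch, with the polling loop standing in for the suite's waitforlisten helper and rpc_get_methods used only as a readiness probe:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
  svcpid=$!
  # poll the default RPC socket until the target responds
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done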
00:20:51.200 [2024-05-14 03:07:37.205724] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.458 [2024-05-14 03:07:37.249649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:52.038 03:07:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:52.038 03:07:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@860 -- # return 0 00:20:52.038 03:07:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:52.038 03:07:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:20:52.038 03:07:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:52.038 03:07:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:20:52.038 03:07:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:20:52.038 03:07:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:52.308 03:07:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:52.308 03:07:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:20:52.308 03:07:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:52.308 03:07:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1374 -- # local bdev_name=nvme0n1 00:20:52.308 03:07:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1375 -- # local bdev_info 00:20:52.308 03:07:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1376 -- # local bs 00:20:52.308 03:07:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1377 -- # local nb 00:20:52.308 03:07:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:52.565 03:07:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:20:52.565 { 00:20:52.565 "name": "nvme0n1", 00:20:52.565 "aliases": [ 00:20:52.565 "87a28aa4-e535-46e3-b271-eb72ea6e2957" 00:20:52.565 ], 00:20:52.565 "product_name": "NVMe disk", 00:20:52.565 "block_size": 4096, 00:20:52.565 "num_blocks": 1310720, 00:20:52.565 "uuid": "87a28aa4-e535-46e3-b271-eb72ea6e2957", 00:20:52.565 "assigned_rate_limits": { 00:20:52.565 "rw_ios_per_sec": 0, 00:20:52.565 "rw_mbytes_per_sec": 0, 00:20:52.565 "r_mbytes_per_sec": 0, 00:20:52.565 "w_mbytes_per_sec": 0 00:20:52.565 }, 00:20:52.565 "claimed": true, 00:20:52.565 "claim_type": "read_many_write_one", 00:20:52.565 "zoned": false, 00:20:52.565 "supported_io_types": { 00:20:52.565 "read": true, 00:20:52.565 "write": true, 00:20:52.565 "unmap": true, 00:20:52.565 "write_zeroes": true, 00:20:52.565 "flush": true, 00:20:52.565 "reset": true, 00:20:52.565 "compare": true, 00:20:52.565 "compare_and_write": false, 00:20:52.565 "abort": true, 00:20:52.565 "nvme_admin": true, 00:20:52.565 "nvme_io": true 00:20:52.565 }, 00:20:52.565 "driver_specific": { 00:20:52.566 "nvme": [ 00:20:52.566 { 00:20:52.566 "pci_address": "0000:00:11.0", 00:20:52.566 "trid": { 00:20:52.566 "trtype": "PCIe", 00:20:52.566 "traddr": "0000:00:11.0" 00:20:52.566 }, 00:20:52.566 "ctrlr_data": { 00:20:52.566 "cntlid": 0, 00:20:52.566 "vendor_id": "0x1b36", 00:20:52.566 "model_number": "QEMU NVMe Ctrl", 00:20:52.566 "serial_number": "12341", 00:20:52.566 "firmware_revision": "8.0.0", 00:20:52.566 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:52.566 "oacs": { 00:20:52.566 "security": 0, 00:20:52.566 "format": 1, 
00:20:52.566 "firmware": 0, 00:20:52.566 "ns_manage": 1 00:20:52.566 }, 00:20:52.566 "multi_ctrlr": false, 00:20:52.566 "ana_reporting": false 00:20:52.566 }, 00:20:52.566 "vs": { 00:20:52.566 "nvme_version": "1.4" 00:20:52.566 }, 00:20:52.566 "ns_data": { 00:20:52.566 "id": 1, 00:20:52.566 "can_share": false 00:20:52.566 } 00:20:52.566 } 00:20:52.566 ], 00:20:52.566 "mp_policy": "active_passive" 00:20:52.566 } 00:20:52.566 } 00:20:52.566 ]' 00:20:52.566 03:07:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:20:52.566 03:07:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # bs=4096 00:20:52.566 03:07:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:20:52.823 03:07:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # nb=1310720 00:20:52.823 03:07:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bdev_size=5120 00:20:52.823 03:07:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # echo 5120 00:20:52.823 03:07:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:20:52.823 03:07:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:52.823 03:07:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:20:52.823 03:07:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:52.823 03:07:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:53.082 03:07:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=59b79526-5d7b-49a5-82ef-9332805b74f0 00:20:53.082 03:07:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:20:53.082 03:07:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 59b79526-5d7b-49a5-82ef-9332805b74f0 00:20:53.082 03:07:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:20:53.339 03:07:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=125bc24c-6605-4a42-8f54-2f1ac6cea61e 00:20:53.339 03:07:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 125bc24c-6605-4a42-8f54-2f1ac6cea61e 00:20:53.598 03:07:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=e2197cf9-e598-4e8c-abe8-a1bbc6664fc4 00:20:53.598 03:07:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:20:53.598 03:07:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 e2197cf9-e598-4e8c-abe8-a1bbc6664fc4 00:20:53.598 03:07:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:20:53.598 03:07:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:53.598 03:07:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=e2197cf9-e598-4e8c-abe8-a1bbc6664fc4 00:20:53.598 03:07:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:20:53.598 03:07:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size e2197cf9-e598-4e8c-abe8-a1bbc6664fc4 00:20:53.598 03:07:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1374 -- # local bdev_name=e2197cf9-e598-4e8c-abe8-a1bbc6664fc4 00:20:53.598 03:07:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1375 -- # local bdev_info 00:20:53.598 03:07:39 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@1376 -- # local bs 00:20:53.598 03:07:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1377 -- # local nb 00:20:53.598 03:07:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e2197cf9-e598-4e8c-abe8-a1bbc6664fc4 00:20:53.856 03:07:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:20:53.856 { 00:20:53.856 "name": "e2197cf9-e598-4e8c-abe8-a1bbc6664fc4", 00:20:53.856 "aliases": [ 00:20:53.856 "lvs/nvme0n1p0" 00:20:53.856 ], 00:20:53.856 "product_name": "Logical Volume", 00:20:53.856 "block_size": 4096, 00:20:53.856 "num_blocks": 26476544, 00:20:53.856 "uuid": "e2197cf9-e598-4e8c-abe8-a1bbc6664fc4", 00:20:53.856 "assigned_rate_limits": { 00:20:53.856 "rw_ios_per_sec": 0, 00:20:53.856 "rw_mbytes_per_sec": 0, 00:20:53.856 "r_mbytes_per_sec": 0, 00:20:53.856 "w_mbytes_per_sec": 0 00:20:53.856 }, 00:20:53.856 "claimed": false, 00:20:53.856 "zoned": false, 00:20:53.856 "supported_io_types": { 00:20:53.856 "read": true, 00:20:53.856 "write": true, 00:20:53.856 "unmap": true, 00:20:53.856 "write_zeroes": true, 00:20:53.856 "flush": false, 00:20:53.856 "reset": true, 00:20:53.856 "compare": false, 00:20:53.856 "compare_and_write": false, 00:20:53.856 "abort": false, 00:20:53.856 "nvme_admin": false, 00:20:53.856 "nvme_io": false 00:20:53.856 }, 00:20:53.856 "driver_specific": { 00:20:53.856 "lvol": { 00:20:53.856 "lvol_store_uuid": "125bc24c-6605-4a42-8f54-2f1ac6cea61e", 00:20:53.856 "base_bdev": "nvme0n1", 00:20:53.856 "thin_provision": true, 00:20:53.856 "num_allocated_clusters": 0, 00:20:53.856 "snapshot": false, 00:20:53.856 "clone": false, 00:20:53.856 "esnap_clone": false 00:20:53.856 } 00:20:53.856 } 00:20:53.856 } 00:20:53.856 ]' 00:20:53.856 03:07:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:20:53.856 03:07:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # bs=4096 00:20:53.856 03:07:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:20:54.123 03:07:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # nb=26476544 00:20:54.123 03:07:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:20:54.123 03:07:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # echo 103424 00:20:54.123 03:07:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:20:54.123 03:07:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:20:54.123 03:07:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:54.385 03:07:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:54.385 03:07:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:54.385 03:07:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size e2197cf9-e598-4e8c-abe8-a1bbc6664fc4 00:20:54.385 03:07:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1374 -- # local bdev_name=e2197cf9-e598-4e8c-abe8-a1bbc6664fc4 00:20:54.385 03:07:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1375 -- # local bdev_info 00:20:54.385 03:07:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1376 -- # local bs 00:20:54.385 03:07:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1377 -- # local nb 00:20:54.385 03:07:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e2197cf9-e598-4e8c-abe8-a1bbc6664fc4 00:20:54.641 03:07:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:20:54.642 { 00:20:54.642 "name": "e2197cf9-e598-4e8c-abe8-a1bbc6664fc4", 00:20:54.642 "aliases": [ 00:20:54.642 "lvs/nvme0n1p0" 00:20:54.642 ], 00:20:54.642 "product_name": "Logical Volume", 00:20:54.642 "block_size": 4096, 00:20:54.642 "num_blocks": 26476544, 00:20:54.642 "uuid": "e2197cf9-e598-4e8c-abe8-a1bbc6664fc4", 00:20:54.642 "assigned_rate_limits": { 00:20:54.642 "rw_ios_per_sec": 0, 00:20:54.642 "rw_mbytes_per_sec": 0, 00:20:54.642 "r_mbytes_per_sec": 0, 00:20:54.642 "w_mbytes_per_sec": 0 00:20:54.642 }, 00:20:54.642 "claimed": false, 00:20:54.642 "zoned": false, 00:20:54.642 "supported_io_types": { 00:20:54.642 "read": true, 00:20:54.642 "write": true, 00:20:54.642 "unmap": true, 00:20:54.642 "write_zeroes": true, 00:20:54.642 "flush": false, 00:20:54.642 "reset": true, 00:20:54.642 "compare": false, 00:20:54.642 "compare_and_write": false, 00:20:54.642 "abort": false, 00:20:54.642 "nvme_admin": false, 00:20:54.642 "nvme_io": false 00:20:54.642 }, 00:20:54.642 "driver_specific": { 00:20:54.642 "lvol": { 00:20:54.642 "lvol_store_uuid": "125bc24c-6605-4a42-8f54-2f1ac6cea61e", 00:20:54.642 "base_bdev": "nvme0n1", 00:20:54.642 "thin_provision": true, 00:20:54.642 "num_allocated_clusters": 0, 00:20:54.642 "snapshot": false, 00:20:54.642 "clone": false, 00:20:54.642 "esnap_clone": false 00:20:54.642 } 00:20:54.642 } 00:20:54.642 } 00:20:54.642 ]' 00:20:54.642 03:07:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:20:54.642 03:07:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # bs=4096 00:20:54.642 03:07:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:20:54.642 03:07:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # nb=26476544 00:20:54.642 03:07:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:20:54.642 03:07:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # echo 103424 00:20:54.642 03:07:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:20:54.642 03:07:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:54.899 03:07:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:20:54.899 03:07:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size e2197cf9-e598-4e8c-abe8-a1bbc6664fc4 00:20:54.899 03:07:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1374 -- # local bdev_name=e2197cf9-e598-4e8c-abe8-a1bbc6664fc4 00:20:54.899 03:07:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1375 -- # local bdev_info 00:20:54.899 03:07:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1376 -- # local bs 00:20:54.899 03:07:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1377 -- # local nb 00:20:54.899 03:07:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e2197cf9-e598-4e8c-abe8-a1bbc6664fc4 00:20:55.157 03:07:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:20:55.157 { 00:20:55.157 "name": "e2197cf9-e598-4e8c-abe8-a1bbc6664fc4", 00:20:55.157 "aliases": [ 00:20:55.157 "lvs/nvme0n1p0" 00:20:55.157 ], 00:20:55.157 "product_name": "Logical Volume", 00:20:55.157 "block_size": 
4096, 00:20:55.157 "num_blocks": 26476544, 00:20:55.157 "uuid": "e2197cf9-e598-4e8c-abe8-a1bbc6664fc4", 00:20:55.157 "assigned_rate_limits": { 00:20:55.157 "rw_ios_per_sec": 0, 00:20:55.157 "rw_mbytes_per_sec": 0, 00:20:55.157 "r_mbytes_per_sec": 0, 00:20:55.157 "w_mbytes_per_sec": 0 00:20:55.157 }, 00:20:55.157 "claimed": false, 00:20:55.157 "zoned": false, 00:20:55.157 "supported_io_types": { 00:20:55.157 "read": true, 00:20:55.157 "write": true, 00:20:55.157 "unmap": true, 00:20:55.157 "write_zeroes": true, 00:20:55.157 "flush": false, 00:20:55.157 "reset": true, 00:20:55.157 "compare": false, 00:20:55.157 "compare_and_write": false, 00:20:55.157 "abort": false, 00:20:55.157 "nvme_admin": false, 00:20:55.157 "nvme_io": false 00:20:55.157 }, 00:20:55.157 "driver_specific": { 00:20:55.157 "lvol": { 00:20:55.157 "lvol_store_uuid": "125bc24c-6605-4a42-8f54-2f1ac6cea61e", 00:20:55.157 "base_bdev": "nvme0n1", 00:20:55.157 "thin_provision": true, 00:20:55.157 "num_allocated_clusters": 0, 00:20:55.157 "snapshot": false, 00:20:55.157 "clone": false, 00:20:55.157 "esnap_clone": false 00:20:55.157 } 00:20:55.157 } 00:20:55.157 } 00:20:55.157 ]' 00:20:55.157 03:07:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:20:55.157 03:07:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # bs=4096 00:20:55.157 03:07:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:20:55.157 03:07:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # nb=26476544 00:20:55.157 03:07:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:20:55.157 03:07:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # echo 103424 00:20:55.158 03:07:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:20:55.158 03:07:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d e2197cf9-e598-4e8c-abe8-a1bbc6664fc4 --l2p_dram_limit 10' 00:20:55.158 03:07:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:20:55.158 03:07:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:20:55.158 03:07:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:20:55.158 03:07:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d e2197cf9-e598-4e8c-abe8-a1bbc6664fc4 --l2p_dram_limit 10 -c nvc0n1p0 00:20:55.417 [2024-05-14 03:07:41.385275] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.417 [2024-05-14 03:07:41.385336] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:55.417 [2024-05-14 03:07:41.385368] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:55.417 [2024-05-14 03:07:41.385384] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.417 [2024-05-14 03:07:41.385462] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.417 [2024-05-14 03:07:41.385494] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:55.417 [2024-05-14 03:07:41.385507] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:20:55.417 [2024-05-14 03:07:41.385531] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.417 [2024-05-14 03:07:41.385566] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: 
*NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:55.417 [2024-05-14 03:07:41.385921] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:55.417 [2024-05-14 03:07:41.385964] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.417 [2024-05-14 03:07:41.385980] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:55.417 [2024-05-14 03:07:41.385996] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.406 ms 00:20:55.417 [2024-05-14 03:07:41.386009] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.417 [2024-05-14 03:07:41.386189] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 21b9fe78-b278-40de-b610-6cc3012e276f 00:20:55.417 [2024-05-14 03:07:41.387278] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.417 [2024-05-14 03:07:41.387312] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:55.417 [2024-05-14 03:07:41.387338] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:20:55.417 [2024-05-14 03:07:41.387350] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.417 [2024-05-14 03:07:41.392035] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.417 [2024-05-14 03:07:41.392093] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:55.417 [2024-05-14 03:07:41.392124] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.611 ms 00:20:55.417 [2024-05-14 03:07:41.392170] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.417 [2024-05-14 03:07:41.392297] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.417 [2024-05-14 03:07:41.392318] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:55.417 [2024-05-14 03:07:41.392336] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:20:55.417 [2024-05-14 03:07:41.392351] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.417 [2024-05-14 03:07:41.392420] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.417 [2024-05-14 03:07:41.392455] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:55.417 [2024-05-14 03:07:41.392480] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:55.417 [2024-05-14 03:07:41.392492] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.417 [2024-05-14 03:07:41.392559] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:55.417 [2024-05-14 03:07:41.394206] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.417 [2024-05-14 03:07:41.394250] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:55.417 [2024-05-14 03:07:41.394270] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.659 ms 00:20:55.417 [2024-05-14 03:07:41.394285] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.417 [2024-05-14 03:07:41.394342] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.417 [2024-05-14 03:07:41.394362] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:55.417 [2024-05-14 03:07:41.394375] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.012 ms 00:20:55.417 [2024-05-14 03:07:41.394389] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.417 [2024-05-14 03:07:41.394413] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:55.417 [2024-05-14 03:07:41.394544] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:20:55.417 [2024-05-14 03:07:41.394562] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:55.417 [2024-05-14 03:07:41.394578] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:20:55.417 [2024-05-14 03:07:41.394593] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:55.417 [2024-05-14 03:07:41.394609] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:55.417 [2024-05-14 03:07:41.394621] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:55.417 [2024-05-14 03:07:41.394634] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:55.417 [2024-05-14 03:07:41.394644] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:20:55.417 [2024-05-14 03:07:41.394659] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:20:55.417 [2024-05-14 03:07:41.394673] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.417 [2024-05-14 03:07:41.394686] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:55.417 [2024-05-14 03:07:41.394698] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:20:55.417 [2024-05-14 03:07:41.394740] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.417 [2024-05-14 03:07:41.394811] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.417 [2024-05-14 03:07:41.394831] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:55.417 [2024-05-14 03:07:41.394843] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:20:55.417 [2024-05-14 03:07:41.394856] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.417 [2024-05-14 03:07:41.394935] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:55.417 [2024-05-14 03:07:41.394957] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:55.417 [2024-05-14 03:07:41.394969] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:55.417 [2024-05-14 03:07:41.394981] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:55.417 [2024-05-14 03:07:41.394993] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:55.417 [2024-05-14 03:07:41.395004] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:55.417 [2024-05-14 03:07:41.395014] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:55.417 [2024-05-14 03:07:41.395026] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:55.418 [2024-05-14 03:07:41.395037] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:55.418 [2024-05-14 03:07:41.395050] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:55.418 [2024-05-14 03:07:41.395060] 
ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:55.418 [2024-05-14 03:07:41.395073] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:55.418 [2024-05-14 03:07:41.395083] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:55.418 [2024-05-14 03:07:41.395097] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:55.418 [2024-05-14 03:07:41.395107] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:20:55.418 [2024-05-14 03:07:41.395118] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:55.418 [2024-05-14 03:07:41.395128] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:55.418 [2024-05-14 03:07:41.395140] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:20:55.418 [2024-05-14 03:07:41.395151] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:55.418 [2024-05-14 03:07:41.395187] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:20:55.418 [2024-05-14 03:07:41.395200] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:20:55.418 [2024-05-14 03:07:41.395212] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:20:55.418 [2024-05-14 03:07:41.395222] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:55.418 [2024-05-14 03:07:41.395234] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:55.418 [2024-05-14 03:07:41.395244] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:55.418 [2024-05-14 03:07:41.395256] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:55.418 [2024-05-14 03:07:41.395266] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:20:55.418 [2024-05-14 03:07:41.395277] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:55.418 [2024-05-14 03:07:41.395303] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:55.418 [2024-05-14 03:07:41.395317] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:55.418 [2024-05-14 03:07:41.395327] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:55.418 [2024-05-14 03:07:41.395339] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:55.418 [2024-05-14 03:07:41.395349] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:20:55.418 [2024-05-14 03:07:41.395361] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:55.418 [2024-05-14 03:07:41.395371] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:55.418 [2024-05-14 03:07:41.395387] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:55.418 [2024-05-14 03:07:41.395398] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:55.418 [2024-05-14 03:07:41.395409] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:55.418 [2024-05-14 03:07:41.395420] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:20:55.418 [2024-05-14 03:07:41.395431] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:55.418 [2024-05-14 03:07:41.395441] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:55.418 [2024-05-14 03:07:41.395454] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 
00:20:55.418 [2024-05-14 03:07:41.395465] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:55.418 [2024-05-14 03:07:41.395478] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:55.418 [2024-05-14 03:07:41.395498] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:55.418 [2024-05-14 03:07:41.395514] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:55.418 [2024-05-14 03:07:41.395524] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:55.418 [2024-05-14 03:07:41.395536] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:55.418 [2024-05-14 03:07:41.395547] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:55.418 [2024-05-14 03:07:41.395558] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:55.418 [2024-05-14 03:07:41.395587] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:55.418 [2024-05-14 03:07:41.395603] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:55.418 [2024-05-14 03:07:41.395617] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:55.418 [2024-05-14 03:07:41.395630] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:20:55.418 [2024-05-14 03:07:41.395642] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:20:55.418 [2024-05-14 03:07:41.395655] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:20:55.418 [2024-05-14 03:07:41.395666] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:20:55.418 [2024-05-14 03:07:41.395680] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:20:55.418 [2024-05-14 03:07:41.395691] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:20:55.418 [2024-05-14 03:07:41.395704] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:20:55.418 [2024-05-14 03:07:41.395716] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:20:55.418 [2024-05-14 03:07:41.395733] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:20:55.418 [2024-05-14 03:07:41.395744] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:20:55.418 [2024-05-14 03:07:41.395758] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:20:55.418 [2024-05-14 03:07:41.395771] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:20:55.418 [2024-05-14 03:07:41.395784] upgrade/ftl_sb_v5.c: 
421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:55.418 [2024-05-14 03:07:41.395797] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:55.418 [2024-05-14 03:07:41.395815] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:55.418 [2024-05-14 03:07:41.395827] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:55.418 [2024-05-14 03:07:41.395840] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:55.418 [2024-05-14 03:07:41.395852] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:55.418 [2024-05-14 03:07:41.395879] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.418 [2024-05-14 03:07:41.395891] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:55.418 [2024-05-14 03:07:41.395905] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.973 ms 00:20:55.418 [2024-05-14 03:07:41.395920] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.418 [2024-05-14 03:07:41.402696] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.418 [2024-05-14 03:07:41.402739] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:55.418 [2024-05-14 03:07:41.402761] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.715 ms 00:20:55.418 [2024-05-14 03:07:41.402773] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.418 [2024-05-14 03:07:41.402873] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.418 [2024-05-14 03:07:41.402891] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:55.418 [2024-05-14 03:07:41.402905] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:20:55.418 [2024-05-14 03:07:41.402915] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.418 [2024-05-14 03:07:41.412401] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.418 [2024-05-14 03:07:41.412453] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:55.418 [2024-05-14 03:07:41.412493] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.400 ms 00:20:55.418 [2024-05-14 03:07:41.412506] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.418 [2024-05-14 03:07:41.412610] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.418 [2024-05-14 03:07:41.412635] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:55.418 [2024-05-14 03:07:41.412668] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:55.418 [2024-05-14 03:07:41.412680] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.418 [2024-05-14 03:07:41.413081] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.418 [2024-05-14 03:07:41.413183] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:55.418 [2024-05-14 03:07:41.413206] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.320 ms 00:20:55.418 [2024-05-14 03:07:41.413223] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.418 [2024-05-14 03:07:41.413378] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.418 [2024-05-14 03:07:41.413405] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:55.418 [2024-05-14 03:07:41.413421] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:20:55.418 [2024-05-14 03:07:41.413433] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.418 [2024-05-14 03:07:41.419657] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.418 [2024-05-14 03:07:41.419698] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:55.418 [2024-05-14 03:07:41.419736] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.172 ms 00:20:55.418 [2024-05-14 03:07:41.419748] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.418 [2024-05-14 03:07:41.428971] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:55.418 [2024-05-14 03:07:41.432056] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.418 [2024-05-14 03:07:41.432100] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:55.418 [2024-05-14 03:07:41.432142] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.229 ms 00:20:55.418 [2024-05-14 03:07:41.432162] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.676 [2024-05-14 03:07:41.479192] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.676 [2024-05-14 03:07:41.479292] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:55.676 [2024-05-14 03:07:41.479319] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.991 ms 00:20:55.676 [2024-05-14 03:07:41.479334] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.676 [2024-05-14 03:07:41.479394] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 
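For reference, the device setup exercised above reduces to a short RPC sequence. The listing below is a minimal sketch assembled from the calls traced in this log, with the bdev names, PCI addresses and size arguments exactly as logged; the lvstore and lvol UUIDs shown are the ones generated in this particular run and would differ on a fresh system.
  # Base device: QEMU NVMe controller at 0000:00:11.0, exposed as nvme0n1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
  # Drop any stale lvstores left by a previous run (one was found here)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores | jq -r '.[] | .uuid'
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 59b79526-5d7b-49a5-82ef-9332805b74f0
  # 103424 MiB thin-provisioned logical volume that becomes the FTL base bdev
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs   # -> 125bc24c-6605-4a42-8f54-2f1ac6cea61e
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 125bc24c-6605-4a42-8f54-2f1ac6cea61e
  # NV cache: second controller at 0000:00:10.0, split so nvc0n1p0 (5171 MiB) serves as the write buffer cache
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
  # FTL device on top of the lvol, L2P capped at 10 MiB of DRAM, nvc0n1p0 as the NV cache
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d e2197cf9-e598-4e8c-abe8-a1bbc6664fc4 --l2p_dram_limit 10 -c nvc0n1p0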
00:20:55.676 [2024-05-14 03:07:41.479420] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:20:57.576 [2024-05-14 03:07:43.539032] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.576 [2024-05-14 03:07:43.539215] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:57.576 [2024-05-14 03:07:43.539245] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2059.637 ms 00:20:57.576 [2024-05-14 03:07:43.539267] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.576 [2024-05-14 03:07:43.539596] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.576 [2024-05-14 03:07:43.539639] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:57.576 [2024-05-14 03:07:43.539666] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.237 ms 00:20:57.576 [2024-05-14 03:07:43.539681] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.576 [2024-05-14 03:07:43.543847] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.576 [2024-05-14 03:07:43.543911] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:57.576 [2024-05-14 03:07:43.543930] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.115 ms 00:20:57.576 [2024-05-14 03:07:43.543946] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.576 [2024-05-14 03:07:43.547452] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.576 [2024-05-14 03:07:43.547516] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:57.576 [2024-05-14 03:07:43.547535] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.436 ms 00:20:57.576 [2024-05-14 03:07:43.547548] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.576 [2024-05-14 03:07:43.547757] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.576 [2024-05-14 03:07:43.547782] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:57.576 [2024-05-14 03:07:43.547804] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.162 ms 00:20:57.576 [2024-05-14 03:07:43.547818] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.576 [2024-05-14 03:07:43.571498] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.576 [2024-05-14 03:07:43.571565] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:57.576 [2024-05-14 03:07:43.571584] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.653 ms 00:20:57.576 [2024-05-14 03:07:43.571598] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.576 [2024-05-14 03:07:43.576302] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.576 [2024-05-14 03:07:43.576355] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:57.576 [2024-05-14 03:07:43.576385] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.658 ms 00:20:57.576 [2024-05-14 03:07:43.576404] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.576 [2024-05-14 03:07:43.578399] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.576 [2024-05-14 03:07:43.578438] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:20:57.576 
[2024-05-14 03:07:43.578470] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.929 ms 00:20:57.576 [2024-05-14 03:07:43.578482] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.576 [2024-05-14 03:07:43.582890] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.576 [2024-05-14 03:07:43.582952] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:57.576 [2024-05-14 03:07:43.582970] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.381 ms 00:20:57.576 [2024-05-14 03:07:43.582983] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.576 [2024-05-14 03:07:43.583033] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.576 [2024-05-14 03:07:43.583055] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:57.576 [2024-05-14 03:07:43.583068] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:57.576 [2024-05-14 03:07:43.583080] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.576 [2024-05-14 03:07:43.583208] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.576 [2024-05-14 03:07:43.583271] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:57.576 [2024-05-14 03:07:43.583329] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:20:57.576 [2024-05-14 03:07:43.583364] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.576 [2024-05-14 03:07:43.584628] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2198.850 ms, result 0 00:20:57.576 { 00:20:57.576 "name": "ftl0", 00:20:57.576 "uuid": "21b9fe78-b278-40de-b610-6cc3012e276f" 00:20:57.576 } 00:20:57.834 03:07:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:20:57.834 03:07:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:58.092 03:07:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:20:58.092 03:07:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:20:58.092 03:07:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:20:58.351 /dev/nbd0 00:20:58.351 03:07:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:20:58.351 03:07:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:20:58.351 03:07:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@865 -- # local i 00:20:58.351 03:07:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:20:58.351 03:07:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:20:58.351 03:07:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:20:58.351 03:07:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # break 00:20:58.351 03:07:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:20:58.351 03:07:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:20:58.351 03:07:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:20:58.351 1+0 records in 
00:20:58.351 1+0 records out 00:20:58.351 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000511907 s, 8.0 MB/s 00:20:58.351 03:07:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:20:58.351 03:07:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@882 -- # size=4096 00:20:58.351 03:07:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:20:58.351 03:07:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:20:58.351 03:07:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # return 0 00:20:58.351 03:07:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:20:58.351 [2024-05-14 03:07:44.283178] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:20:58.351 [2024-05-14 03:07:44.283378] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92969 ] 00:20:58.610 [2024-05-14 03:07:44.436814] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:58.610 [2024-05-14 03:07:44.459785] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.610 [2024-05-14 03:07:44.506002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:05.184  Copying: 172/1024 [MB] (172 MBps) Copying: 331/1024 [MB] (158 MBps) Copying: 493/1024 [MB] (161 MBps) Copying: 660/1024 [MB] (166 MBps) Copying: 822/1024 [MB] (161 MBps) Copying: 984/1024 [MB] (162 MBps) Copying: 1024/1024 [MB] (average 163 MBps) 00:21:05.184 00:21:05.184 03:07:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:07.718 03:07:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:21:07.718 [2024-05-14 03:07:53.494186] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:21:07.718 [2024-05-14 03:07:53.494361] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93062 ] 00:21:07.718 [2024-05-14 03:07:53.643816] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
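The data path exercised from this point on is equally compact: the FTL bdev is exposed through the kernel NBD driver, 1 GiB of random data (262144 x 4 KiB blocks) is staged in a file, checksummed, and then pushed through /dev/nbd0 with direct I/O so it can be verified after the dirty shutdown. A minimal sketch of those steps, using the same paths and arguments that appear in this log:
  modprobe nbd
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0
  # Stage 1 GiB of random data and record its checksum
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144
  md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
  # Write the same data through the FTL device with direct I/O
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct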
00:21:07.718 [2024-05-14 03:07:53.667378] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:07.718 [2024-05-14 03:07:53.711547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:14.948  Copying: 16/1024 [MB] (16 MBps) Copying: 31/1024 [MB] (15 MBps) Copying: 42056/1048576 [kB] (9832 kBps) Copying: 56/1024 [MB] (14 MBps) Copying: 71/1024 [MB] (15 MBps) Copying: 85/1024 [MB] (13 MBps) Copying: 98/1024 [MB] (13 MBps) Copying: 112/1024 [MB] (13 MBps) Copying: 126/1024 [MB] (13 MBps) Copying: 141/1024 [MB] (14 MBps) Copying: 156/1024 [MB] (15 MBps) Copying: 172/1024 [MB] (15 MBps) Copying: 188/1024 [MB] (16 MBps) Copying: 204/1024 [MB] (15 MBps) Copying: 220/1024 [MB] (16 MBps) Copying: 236/1024 [MB] (16 MBps) Copying: 253/1024 [MB] (16 MBps) Copying: 268/1024 [MB] (15 MBps) Copying: 284/1024 [MB] (15 MBps) Copying: 299/1024 [MB] (14 MBps) Copying: 315/1024 [MB] (15 MBps) Copying: 330/1024 [MB] (15 MBps) Copying: 345/1024 [MB] (15 MBps) Copying: 360/1024 [MB] (14 MBps) Copying: 375/1024 [MB] (15 MBps) Copying: 390/1024 [MB] (15 MBps) Copying: 405/1024 [MB] (15 MBps) Copying: 421/1024 [MB] (15 MBps) Copying: 436/1024 [MB] (15 MBps) Copying: 452/1024 [MB] (15 MBps) Copying: 467/1024 [MB] (15 MBps) Copying: 483/1024 [MB] (15 MBps) Copying: 498/1024 [MB] (15 MBps) Copying: 513/1024 [MB] (15 MBps) Copying: 529/1024 [MB] (15 MBps) Copying: 544/1024 [MB] (15 MBps) Copying: 560/1024 [MB] (15 MBps) Copying: 576/1024 [MB] (15 MBps) Copying: 591/1024 [MB] (15 MBps) Copying: 607/1024 [MB] (15 MBps) Copying: 623/1024 [MB] (15 MBps) Copying: 638/1024 [MB] (15 MBps) Copying: 653/1024 [MB] (15 MBps) Copying: 669/1024 [MB] (15 MBps) Copying: 684/1024 [MB] (15 MBps) Copying: 700/1024 [MB] (15 MBps) Copying: 715/1024 [MB] (15 MBps) Copying: 731/1024 [MB] (15 MBps) Copying: 746/1024 [MB] (15 MBps) Copying: 762/1024 [MB] (15 MBps) Copying: 777/1024 [MB] (15 MBps) Copying: 793/1024 [MB] (15 MBps) Copying: 808/1024 [MB] (15 MBps) Copying: 824/1024 [MB] (15 MBps) Copying: 840/1024 [MB] (15 MBps) Copying: 855/1024 [MB] (15 MBps) Copying: 870/1024 [MB] (15 MBps) Copying: 886/1024 [MB] (15 MBps) Copying: 902/1024 [MB] (15 MBps) Copying: 917/1024 [MB] (15 MBps) Copying: 933/1024 [MB] (15 MBps) Copying: 949/1024 [MB] (15 MBps) Copying: 965/1024 [MB] (15 MBps) Copying: 980/1024 [MB] (15 MBps) Copying: 995/1024 [MB] (15 MBps) Copying: 1011/1024 [MB] (15 MBps) Copying: 1024/1024 [MB] (average 15 MBps) 00:22:14.948 00:22:14.948 03:09:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:22:14.948 03:09:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:22:15.207 03:09:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:22:15.468 [2024-05-14 03:09:01.266807] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.468 [2024-05-14 03:09:01.266861] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:15.468 [2024-05-14 03:09:01.266900] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:15.468 [2024-05-14 03:09:01.266912] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.468 [2024-05-14 03:09:01.266946] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:15.468 [2024-05-14 03:09:01.267489] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:15.468 [2024-05-14 03:09:01.267521] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:15.468 [2024-05-14 03:09:01.267535] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.503 ms 00:22:15.468 [2024-05-14 03:09:01.267550] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.468 [2024-05-14 03:09:01.269426] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.468 [2024-05-14 03:09:01.269504] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:15.468 [2024-05-14 03:09:01.269521] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.831 ms 00:22:15.468 [2024-05-14 03:09:01.269533] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.468 [2024-05-14 03:09:01.285491] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.468 [2024-05-14 03:09:01.285562] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:15.468 [2024-05-14 03:09:01.285581] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.935 ms 00:22:15.468 [2024-05-14 03:09:01.285595] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.468 [2024-05-14 03:09:01.291565] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.468 [2024-05-14 03:09:01.291616] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:22:15.468 [2024-05-14 03:09:01.291632] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.927 ms 00:22:15.468 [2024-05-14 03:09:01.291644] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.468 [2024-05-14 03:09:01.293063] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.468 [2024-05-14 03:09:01.293126] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:15.468 [2024-05-14 03:09:01.293142] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.324 ms 00:22:15.468 [2024-05-14 03:09:01.293193] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.468 [2024-05-14 03:09:01.297707] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.468 [2024-05-14 03:09:01.297770] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:15.468 [2024-05-14 03:09:01.297794] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.470 ms 00:22:15.468 [2024-05-14 03:09:01.297817] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.468 [2024-05-14 03:09:01.297939] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.468 [2024-05-14 03:09:01.297965] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:15.468 [2024-05-14 03:09:01.297979] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:22:15.468 [2024-05-14 03:09:01.297991] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.468 [2024-05-14 03:09:01.299696] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.468 [2024-05-14 03:09:01.299754] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:15.468 [2024-05-14 03:09:01.299769] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.683 ms 00:22:15.468 [2024-05-14 03:09:01.299781] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.468 [2024-05-14 03:09:01.301366] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.468 [2024-05-14 03:09:01.301422] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:15.468 [2024-05-14 03:09:01.301438] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.547 ms 00:22:15.468 [2024-05-14 03:09:01.301450] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.468 [2024-05-14 03:09:01.302720] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.468 [2024-05-14 03:09:01.302823] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:15.468 [2024-05-14 03:09:01.302839] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.229 ms 00:22:15.468 [2024-05-14 03:09:01.302851] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.468 [2024-05-14 03:09:01.304050] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.468 [2024-05-14 03:09:01.304289] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:15.468 [2024-05-14 03:09:01.304445] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.132 ms 00:22:15.468 [2024-05-14 03:09:01.304502] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.468 [2024-05-14 03:09:01.304579] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:15.468 [2024-05-14 03:09:01.304612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:15.468 [2024-05-14 03:09:01.304626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:15.468 [2024-05-14 03:09:01.304642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:15.468 [2024-05-14 03:09:01.304653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:15.468 [2024-05-14 03:09:01.304667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:15.468 [2024-05-14 03:09:01.304678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:15.468 [2024-05-14 03:09:01.304691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:15.468 [2024-05-14 03:09:01.304702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:15.468 [2024-05-14 03:09:01.304715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:15.468 [2024-05-14 03:09:01.304726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:15.468 [2024-05-14 03:09:01.304739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:15.468 [2024-05-14 03:09:01.304750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:15.468 [2024-05-14 03:09:01.304765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:15.468 [2024-05-14 03:09:01.304776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:15.468 [2024-05-14 03:09:01.304788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 
state: free 00:22:15.468 [2024-05-14 03:09:01.304799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:15.468 [2024-05-14 03:09:01.304812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:15.468 [2024-05-14 03:09:01.304823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:15.468 [2024-05-14 03:09:01.304836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:15.468 [2024-05-14 03:09:01.304847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:15.468 [2024-05-14 03:09:01.304860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:15.468 [2024-05-14 03:09:01.304870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:15.468 [2024-05-14 03:09:01.304883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:15.468 [2024-05-14 03:09:01.304894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:15.468 [2024-05-14 03:09:01.304907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:15.468 [2024-05-14 03:09:01.304919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.304931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.304942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.304986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.304998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 
0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305871] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.305991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.306004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:15.469 [2024-05-14 03:09:01.306027] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:15.469 [2024-05-14 03:09:01.306047] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 21b9fe78-b278-40de-b610-6cc3012e276f 00:22:15.469 [2024-05-14 03:09:01.306061] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:15.470 [2024-05-14 03:09:01.306073] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:15.470 [2024-05-14 03:09:01.306087] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:15.470 [2024-05-14 03:09:01.306099] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:15.470 [2024-05-14 03:09:01.306111] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:15.470 [2024-05-14 03:09:01.306122] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:15.470 [2024-05-14 03:09:01.306152] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:15.470 [2024-05-14 03:09:01.306178] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:15.470 [2024-05-14 03:09:01.306190] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:15.470 [2024-05-14 03:09:01.306201] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.470 [2024-05-14 03:09:01.306215] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:15.470 [2024-05-14 03:09:01.306230] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.623 ms 00:22:15.470 [2024-05-14 03:09:01.306279] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.470 [2024-05-14 03:09:01.307680] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.470 [2024-05-14 03:09:01.307712] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:15.470 [2024-05-14 03:09:01.307726] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 1.350 ms 00:22:15.470 [2024-05-14 03:09:01.307738] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.470 [2024-05-14 03:09:01.307798] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.470 [2024-05-14 03:09:01.307825] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:15.470 [2024-05-14 03:09:01.307838] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:22:15.470 [2024-05-14 03:09:01.307850] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.470 [2024-05-14 03:09:01.313082] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.470 [2024-05-14 03:09:01.313121] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:15.470 [2024-05-14 03:09:01.313134] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.470 [2024-05-14 03:09:01.313146] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.470 [2024-05-14 03:09:01.313238] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.470 [2024-05-14 03:09:01.313263] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:15.470 [2024-05-14 03:09:01.313274] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.470 [2024-05-14 03:09:01.313286] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.470 [2024-05-14 03:09:01.313399] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.470 [2024-05-14 03:09:01.313423] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:15.470 [2024-05-14 03:09:01.313436] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.470 [2024-05-14 03:09:01.313447] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.470 [2024-05-14 03:09:01.313504] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.470 [2024-05-14 03:09:01.313522] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:15.470 [2024-05-14 03:09:01.313537] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.470 [2024-05-14 03:09:01.313550] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.470 [2024-05-14 03:09:01.322587] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.470 [2024-05-14 03:09:01.322675] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:15.470 [2024-05-14 03:09:01.322694] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.470 [2024-05-14 03:09:01.322707] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.470 [2024-05-14 03:09:01.326373] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.470 [2024-05-14 03:09:01.326437] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:15.470 [2024-05-14 03:09:01.326454] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.470 [2024-05-14 03:09:01.326466] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.470 [2024-05-14 03:09:01.326550] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.470 [2024-05-14 03:09:01.326572] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:15.470 
[2024-05-14 03:09:01.326592] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.470 [2024-05-14 03:09:01.326609] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.470 [2024-05-14 03:09:01.326663] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.470 [2024-05-14 03:09:01.326681] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:15.470 [2024-05-14 03:09:01.326692] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.470 [2024-05-14 03:09:01.326717] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.470 [2024-05-14 03:09:01.326801] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.470 [2024-05-14 03:09:01.326825] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:15.470 [2024-05-14 03:09:01.326837] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.470 [2024-05-14 03:09:01.326848] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.470 [2024-05-14 03:09:01.326894] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.470 [2024-05-14 03:09:01.326921] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:15.470 [2024-05-14 03:09:01.326933] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.470 [2024-05-14 03:09:01.326944] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.470 [2024-05-14 03:09:01.327020] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.470 [2024-05-14 03:09:01.327047] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:15.470 [2024-05-14 03:09:01.327059] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.470 [2024-05-14 03:09:01.327070] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.470 [2024-05-14 03:09:01.327189] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.470 [2024-05-14 03:09:01.327215] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:15.470 [2024-05-14 03:09:01.327228] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.470 [2024-05-14 03:09:01.327259] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.470 [2024-05-14 03:09:01.327442] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 60.588 ms, result 0 00:22:15.470 true 00:22:15.470 03:09:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 92838 00:22:15.470 03:09:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid92838 00:22:15.470 03:09:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:22:15.470 [2024-05-14 03:09:01.447192] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 
00:22:15.470 [2024-05-14 03:09:01.447371] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93751 ] 00:22:15.730 [2024-05-14 03:09:01.596715] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:15.730 [2024-05-14 03:09:01.615749] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:15.730 [2024-05-14 03:09:01.650889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:21.174  Copying: 164/1024 [MB] (164 MBps) Copying: 371/1024 [MB] (206 MBps) Copying: 578/1024 [MB] (207 MBps) Copying: 784/1024 [MB] (205 MBps) Copying: 991/1024 [MB] (207 MBps) Copying: 1024/1024 [MB] (average 198 MBps) 00:22:21.174 00:22:21.174 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 92838 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:22:21.174 03:09:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:21.174 [2024-05-14 03:09:07.152553] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:22:21.174 [2024-05-14 03:09:07.152753] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93810 ] 00:22:21.432 [2024-05-14 03:09:07.301136] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:22:21.432 [2024-05-14 03:09:07.320792] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:21.432 [2024-05-14 03:09:07.356074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:21.432 [2024-05-14 03:09:07.438283] bdev.c:8090:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:21.432 [2024-05-14 03:09:07.438378] bdev.c:8090:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:21.690 [2024-05-14 03:09:07.499346] blobstore.c:4805:bs_recover: *NOTICE*: Performing recovery on blobstore 00:22:21.690 [2024-05-14 03:09:07.499702] blobstore.c:4752:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:22:21.690 [2024-05-14 03:09:07.500010] blobstore.c:4752:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:22:21.951 [2024-05-14 03:09:07.777866] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.951 [2024-05-14 03:09:07.777928] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:21.951 [2024-05-14 03:09:07.777973] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:21.951 [2024-05-14 03:09:07.777983] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.951 [2024-05-14 03:09:07.778057] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.951 [2024-05-14 03:09:07.778076] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:21.951 [2024-05-14 03:09:07.778110] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:22:21.951 [2024-05-14 03:09:07.778119] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.951 [2024-05-14 03:09:07.778147] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:21.951 [2024-05-14 03:09:07.778451] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:21.951 [2024-05-14 03:09:07.778477] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.951 [2024-05-14 03:09:07.778492] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:21.951 [2024-05-14 03:09:07.778504] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.337 ms 00:22:21.951 [2024-05-14 03:09:07.778514] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.951 [2024-05-14 03:09:07.779795] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:21.951 [2024-05-14 03:09:07.782181] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.951 [2024-05-14 03:09:07.782271] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:21.951 [2024-05-14 03:09:07.782304] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.388 ms 00:22:21.951 [2024-05-14 03:09:07.782316] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.951 [2024-05-14 03:09:07.782397] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.951 [2024-05-14 03:09:07.782416] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:21.951 [2024-05-14 03:09:07.782432] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:22:21.951 [2024-05-14 03:09:07.782452] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.951 [2024-05-14 03:09:07.787268] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:22:21.951 [2024-05-14 03:09:07.787308] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:21.951 [2024-05-14 03:09:07.787338] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.743 ms 00:22:21.951 [2024-05-14 03:09:07.787377] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.951 [2024-05-14 03:09:07.787493] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.951 [2024-05-14 03:09:07.787511] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:21.951 [2024-05-14 03:09:07.787523] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:22:21.951 [2024-05-14 03:09:07.787533] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.951 [2024-05-14 03:09:07.787610] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.951 [2024-05-14 03:09:07.787627] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:21.951 [2024-05-14 03:09:07.787642] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:21.951 [2024-05-14 03:09:07.787668] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.951 [2024-05-14 03:09:07.787733] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:21.951 [2024-05-14 03:09:07.789441] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.951 [2024-05-14 03:09:07.789475] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:21.951 [2024-05-14 03:09:07.789489] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.717 ms 00:22:21.951 [2024-05-14 03:09:07.789504] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.951 [2024-05-14 03:09:07.789545] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.951 [2024-05-14 03:09:07.789574] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:21.951 [2024-05-14 03:09:07.789586] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:21.951 [2024-05-14 03:09:07.789595] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.951 [2024-05-14 03:09:07.789643] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:21.951 [2024-05-14 03:09:07.789694] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:22:21.951 [2024-05-14 03:09:07.789750] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:21.951 [2024-05-14 03:09:07.789782] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:22:21.952 [2024-05-14 03:09:07.789859] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:22:21.952 [2024-05-14 03:09:07.789874] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:21.952 [2024-05-14 03:09:07.789886] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:22:21.952 [2024-05-14 03:09:07.789899] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:21.952 [2024-05-14 03:09:07.789911] 
ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:21.952 [2024-05-14 03:09:07.789922] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:21.952 [2024-05-14 03:09:07.790009] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:21.952 [2024-05-14 03:09:07.790033] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:22:21.952 [2024-05-14 03:09:07.790065] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:22:21.952 [2024-05-14 03:09:07.790081] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.952 [2024-05-14 03:09:07.790091] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:21.952 [2024-05-14 03:09:07.790100] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.453 ms 00:22:21.952 [2024-05-14 03:09:07.790109] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.952 [2024-05-14 03:09:07.790174] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.952 [2024-05-14 03:09:07.790201] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:21.952 [2024-05-14 03:09:07.790215] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:22:21.952 [2024-05-14 03:09:07.790224] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.952 [2024-05-14 03:09:07.790307] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:21.952 [2024-05-14 03:09:07.790326] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:21.952 [2024-05-14 03:09:07.790336] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:21.952 [2024-05-14 03:09:07.790345] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:21.952 [2024-05-14 03:09:07.790355] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:21.952 [2024-05-14 03:09:07.790364] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:21.952 [2024-05-14 03:09:07.790372] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:21.952 [2024-05-14 03:09:07.790381] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:21.952 [2024-05-14 03:09:07.790390] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:21.952 [2024-05-14 03:09:07.790398] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:21.952 [2024-05-14 03:09:07.790407] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:21.952 [2024-05-14 03:09:07.790426] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:21.952 [2024-05-14 03:09:07.790440] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:21.952 [2024-05-14 03:09:07.790449] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:21.952 [2024-05-14 03:09:07.790458] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:22:21.952 [2024-05-14 03:09:07.790466] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:21.952 [2024-05-14 03:09:07.790474] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:21.952 [2024-05-14 03:09:07.790483] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:22:21.952 [2024-05-14 03:09:07.790492] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:21.952 [2024-05-14 03:09:07.790500] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:22:21.952 [2024-05-14 03:09:07.790508] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:22:21.952 [2024-05-14 03:09:07.790532] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:22:21.952 [2024-05-14 03:09:07.790541] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:21.952 [2024-05-14 03:09:07.790549] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:21.952 [2024-05-14 03:09:07.790558] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:21.952 [2024-05-14 03:09:07.790566] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:21.952 [2024-05-14 03:09:07.790575] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:22:21.952 [2024-05-14 03:09:07.790583] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:21.952 [2024-05-14 03:09:07.790599] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:21.952 [2024-05-14 03:09:07.790611] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:21.952 [2024-05-14 03:09:07.790619] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:21.952 [2024-05-14 03:09:07.790627] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:21.952 [2024-05-14 03:09:07.790636] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:22:21.952 [2024-05-14 03:09:07.790660] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:21.952 [2024-05-14 03:09:07.790684] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:21.952 [2024-05-14 03:09:07.790693] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:21.952 [2024-05-14 03:09:07.790702] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:21.952 [2024-05-14 03:09:07.790710] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:21.952 [2024-05-14 03:09:07.790719] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:22:21.952 [2024-05-14 03:09:07.790728] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:21.952 [2024-05-14 03:09:07.790737] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:21.952 [2024-05-14 03:09:07.790747] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:21.952 [2024-05-14 03:09:07.790764] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:21.952 [2024-05-14 03:09:07.790774] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:21.952 [2024-05-14 03:09:07.790787] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:21.952 [2024-05-14 03:09:07.790798] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:21.952 [2024-05-14 03:09:07.790807] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:21.952 [2024-05-14 03:09:07.790816] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:21.952 [2024-05-14 03:09:07.790828] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:21.952 [2024-05-14 03:09:07.790837] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:21.952 
[2024-05-14 03:09:07.790847] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:21.952 [2024-05-14 03:09:07.790860] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:21.952 [2024-05-14 03:09:07.790871] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:21.952 [2024-05-14 03:09:07.790881] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:22:21.952 [2024-05-14 03:09:07.790891] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:22:21.952 [2024-05-14 03:09:07.790901] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:22:21.952 [2024-05-14 03:09:07.790911] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:22:21.952 [2024-05-14 03:09:07.790921] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:22:21.952 [2024-05-14 03:09:07.790931] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:22:21.952 [2024-05-14 03:09:07.790941] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:22:21.952 [2024-05-14 03:09:07.790953] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:22:21.952 [2024-05-14 03:09:07.790964] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:22:21.952 [2024-05-14 03:09:07.790974] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:22:21.952 [2024-05-14 03:09:07.790985] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:22:21.952 [2024-05-14 03:09:07.790995] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:22:21.952 [2024-05-14 03:09:07.791005] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:21.952 [2024-05-14 03:09:07.791019] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:21.952 [2024-05-14 03:09:07.791030] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:21.952 [2024-05-14 03:09:07.791055] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:21.952 [2024-05-14 03:09:07.791065] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:21.952 [2024-05-14 03:09:07.791075] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:21.952 [2024-05-14 03:09:07.791086] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.952 [2024-05-14 03:09:07.791111] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:21.952 [2024-05-14 03:09:07.791122] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.816 ms 00:22:21.952 [2024-05-14 03:09:07.791142] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.952 [2024-05-14 03:09:07.797896] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.952 [2024-05-14 03:09:07.797952] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:21.952 [2024-05-14 03:09:07.797984] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.672 ms 00:22:21.952 [2024-05-14 03:09:07.797994] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.952 [2024-05-14 03:09:07.798129] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.952 [2024-05-14 03:09:07.798165] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:21.952 [2024-05-14 03:09:07.798176] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:22:21.952 [2024-05-14 03:09:07.798188] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.952 [2024-05-14 03:09:07.816210] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.952 [2024-05-14 03:09:07.816281] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:21.953 [2024-05-14 03:09:07.816302] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.897 ms 00:22:21.953 [2024-05-14 03:09:07.816314] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.953 [2024-05-14 03:09:07.816409] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.953 [2024-05-14 03:09:07.816438] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:21.953 [2024-05-14 03:09:07.816465] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:21.953 [2024-05-14 03:09:07.816476] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.953 [2024-05-14 03:09:07.816913] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.953 [2024-05-14 03:09:07.816933] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:21.953 [2024-05-14 03:09:07.816957] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.345 ms 00:22:21.953 [2024-05-14 03:09:07.816968] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.953 [2024-05-14 03:09:07.817187] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.953 [2024-05-14 03:09:07.817212] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:21.953 [2024-05-14 03:09:07.817242] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.161 ms 00:22:21.953 [2024-05-14 03:09:07.817254] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.953 [2024-05-14 03:09:07.824281] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.953 [2024-05-14 03:09:07.824351] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:21.953 [2024-05-14 03:09:07.824381] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
6.987 ms 00:22:21.953 [2024-05-14 03:09:07.824402] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.953 [2024-05-14 03:09:07.827932] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:21.953 [2024-05-14 03:09:07.828012] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:21.953 [2024-05-14 03:09:07.828064] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.953 [2024-05-14 03:09:07.828094] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:21.953 [2024-05-14 03:09:07.828169] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.362 ms 00:22:21.953 [2024-05-14 03:09:07.828209] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.953 [2024-05-14 03:09:07.848535] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.953 [2024-05-14 03:09:07.848628] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:21.953 [2024-05-14 03:09:07.848659] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.233 ms 00:22:21.953 [2024-05-14 03:09:07.848683] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.953 [2024-05-14 03:09:07.851316] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.953 [2024-05-14 03:09:07.851382] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:21.953 [2024-05-14 03:09:07.851396] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.522 ms 00:22:21.953 [2024-05-14 03:09:07.851406] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.953 [2024-05-14 03:09:07.853218] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.953 [2024-05-14 03:09:07.853298] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:21.953 [2024-05-14 03:09:07.853312] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.773 ms 00:22:21.953 [2024-05-14 03:09:07.853322] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.953 [2024-05-14 03:09:07.853581] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.953 [2024-05-14 03:09:07.853626] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:21.953 [2024-05-14 03:09:07.853663] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.171 ms 00:22:21.953 [2024-05-14 03:09:07.853689] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.953 [2024-05-14 03:09:07.873682] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.953 [2024-05-14 03:09:07.873771] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:21.953 [2024-05-14 03:09:07.873791] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.968 ms 00:22:21.953 [2024-05-14 03:09:07.873801] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.953 [2024-05-14 03:09:07.881575] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:21.953 [2024-05-14 03:09:07.884437] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.953 [2024-05-14 03:09:07.884514] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:21.953 [2024-05-14 
03:09:07.884535] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.548 ms 00:22:21.953 [2024-05-14 03:09:07.884547] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.953 [2024-05-14 03:09:07.884657] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.953 [2024-05-14 03:09:07.884676] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:21.953 [2024-05-14 03:09:07.884691] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:21.953 [2024-05-14 03:09:07.884701] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.953 [2024-05-14 03:09:07.884818] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.953 [2024-05-14 03:09:07.884838] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:21.953 [2024-05-14 03:09:07.884855] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:22:21.953 [2024-05-14 03:09:07.884865] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.953 [2024-05-14 03:09:07.886779] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.953 [2024-05-14 03:09:07.886835] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:22:21.953 [2024-05-14 03:09:07.886849] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.887 ms 00:22:21.953 [2024-05-14 03:09:07.886866] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.953 [2024-05-14 03:09:07.886901] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.953 [2024-05-14 03:09:07.886915] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:21.953 [2024-05-14 03:09:07.886925] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:21.953 [2024-05-14 03:09:07.886935] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.953 [2024-05-14 03:09:07.886976] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:21.953 [2024-05-14 03:09:07.886991] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.953 [2024-05-14 03:09:07.887001] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:21.953 [2024-05-14 03:09:07.887011] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:22:21.953 [2024-05-14 03:09:07.887021] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.953 [2024-05-14 03:09:07.890672] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.953 [2024-05-14 03:09:07.890724] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:21.953 [2024-05-14 03:09:07.890747] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.626 ms 00:22:21.953 [2024-05-14 03:09:07.890767] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.953 [2024-05-14 03:09:07.890848] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.953 [2024-05-14 03:09:07.890866] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:21.953 [2024-05-14 03:09:07.890877] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:22:21.953 [2024-05-14 03:09:07.890886] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.953 [2024-05-14 03:09:07.892137] mngt/ftl_mngt.c: 
434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 113.691 ms, result 0 00:23:07.864  Copying: 22/1024 [MB] (22 MBps) Copying: 45/1024 [MB] (22 MBps) Copying: 68/1024 [MB] (23 MBps) Copying: 91/1024 [MB] (22 MBps) Copying: 114/1024 [MB] (23 MBps) Copying: 137/1024 [MB] (22 MBps) Copying: 159/1024 [MB] (22 MBps) Copying: 181/1024 [MB] (22 MBps) Copying: 204/1024 [MB] (22 MBps) Copying: 227/1024 [MB] (22 MBps) Copying: 250/1024 [MB] (22 MBps) Copying: 272/1024 [MB] (22 MBps) Copying: 295/1024 [MB] (22 MBps) Copying: 318/1024 [MB] (22 MBps) Copying: 340/1024 [MB] (22 MBps) Copying: 363/1024 [MB] (23 MBps) Copying: 386/1024 [MB] (22 MBps) Copying: 408/1024 [MB] (22 MBps) Copying: 432/1024 [MB] (23 MBps) Copying: 455/1024 [MB] (23 MBps) Copying: 479/1024 [MB] (23 MBps) Copying: 502/1024 [MB] (23 MBps) Copying: 525/1024 [MB] (23 MBps) Copying: 549/1024 [MB] (23 MBps) Copying: 573/1024 [MB] (23 MBps) Copying: 596/1024 [MB] (23 MBps) Copying: 619/1024 [MB] (23 MBps) Copying: 642/1024 [MB] (23 MBps) Copying: 666/1024 [MB] (23 MBps) Copying: 689/1024 [MB] (23 MBps) Copying: 713/1024 [MB] (23 MBps) Copying: 736/1024 [MB] (23 MBps) Copying: 759/1024 [MB] (23 MBps) Copying: 782/1024 [MB] (22 MBps) Copying: 805/1024 [MB] (22 MBps) Copying: 827/1024 [MB] (22 MBps) Copying: 849/1024 [MB] (22 MBps) Copying: 872/1024 [MB] (22 MBps) Copying: 894/1024 [MB] (22 MBps) Copying: 917/1024 [MB] (22 MBps) Copying: 939/1024 [MB] (22 MBps) Copying: 962/1024 [MB] (22 MBps) Copying: 985/1024 [MB] (22 MBps) Copying: 1008/1024 [MB] (22 MBps) Copying: 1023/1024 [MB] (15 MBps) Copying: 1024/1024 [MB] (average 22 MBps)[2024-05-14 03:09:53.728367] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.864 [2024-05-14 03:09:53.728477] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:07.864 [2024-05-14 03:09:53.728515] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:07.864 [2024-05-14 03:09:53.728526] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.864 [2024-05-14 03:09:53.729623] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:07.864 [2024-05-14 03:09:53.732297] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.864 [2024-05-14 03:09:53.732340] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:07.864 [2024-05-14 03:09:53.732356] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.598 ms 00:23:07.864 [2024-05-14 03:09:53.732368] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.864 [2024-05-14 03:09:53.744465] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.864 [2024-05-14 03:09:53.744521] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:07.864 [2024-05-14 03:09:53.744565] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.003 ms 00:23:07.864 [2024-05-14 03:09:53.744575] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.864 [2024-05-14 03:09:53.765373] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.864 [2024-05-14 03:09:53.765428] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:07.864 [2024-05-14 03:09:53.765472] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.778 ms 00:23:07.864 [2024-05-14 03:09:53.765482] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.864 [2024-05-14 03:09:53.770900] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.864 [2024-05-14 03:09:53.770937] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:23:07.864 [2024-05-14 03:09:53.770965] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.381 ms 00:23:07.864 [2024-05-14 03:09:53.770975] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.864 [2024-05-14 03:09:53.772285] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.864 [2024-05-14 03:09:53.772337] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:07.864 [2024-05-14 03:09:53.772351] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.254 ms 00:23:07.864 [2024-05-14 03:09:53.772360] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.864 [2024-05-14 03:09:53.775568] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.864 [2024-05-14 03:09:53.775618] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:07.864 [2024-05-14 03:09:53.775631] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.174 ms 00:23:07.864 [2024-05-14 03:09:53.775656] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.125 [2024-05-14 03:09:53.901383] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.125 [2024-05-14 03:09:53.901463] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:08.125 [2024-05-14 03:09:53.901512] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 125.692 ms 00:23:08.125 [2024-05-14 03:09:53.901522] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.125 [2024-05-14 03:09:53.903443] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.125 [2024-05-14 03:09:53.903490] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:23:08.125 [2024-05-14 03:09:53.903519] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.902 ms 00:23:08.125 [2024-05-14 03:09:53.903528] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.125 [2024-05-14 03:09:53.905093] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.125 [2024-05-14 03:09:53.905155] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:23:08.125 [2024-05-14 03:09:53.905200] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.532 ms 00:23:08.125 [2024-05-14 03:09:53.905210] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.125 [2024-05-14 03:09:53.906380] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.125 [2024-05-14 03:09:53.906447] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:08.125 [2024-05-14 03:09:53.906459] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.136 ms 00:23:08.125 [2024-05-14 03:09:53.906468] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.125 [2024-05-14 03:09:53.907556] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.125 [2024-05-14 03:09:53.907607] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:08.125 [2024-05-14 03:09:53.907619] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
1.030 ms 00:23:08.125 [2024-05-14 03:09:53.907628] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.125 [2024-05-14 03:09:53.907660] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:08.125 [2024-05-14 03:09:53.907680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 130048 / 261120 wr_cnt: 1 state: open 00:23:08.125 [2024-05-14 03:09:53.907691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:08.125 [2024-05-14 03:09:53.907702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:08.125 [2024-05-14 03:09:53.907711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:08.125 [2024-05-14 03:09:53.907721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:08.125 [2024-05-14 03:09:53.907730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:08.125 [2024-05-14 03:09:53.907739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:08.125 [2024-05-14 03:09:53.907749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:08.125 [2024-05-14 03:09:53.907758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:08.125 [2024-05-14 03:09:53.907767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:08.125 [2024-05-14 03:09:53.907776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:08.125 [2024-05-14 03:09:53.907785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:08.125 [2024-05-14 03:09:53.907794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:08.125 [2024-05-14 03:09:53.907803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:08.125 [2024-05-14 03:09:53.907812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:08.125 [2024-05-14 03:09:53.907822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:08.125 [2024-05-14 03:09:53.907831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:08.125 [2024-05-14 03:09:53.907839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:08.125 [2024-05-14 03:09:53.907864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:08.125 [2024-05-14 03:09:53.907889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:08.125 [2024-05-14 03:09:53.907898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:08.125 [2024-05-14 03:09:53.907908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:08.125 [2024-05-14 03:09:53.907917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:08.125 [2024-05-14 
03:09:53.907927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:08.125 [2024-05-14 03:09:53.907936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:08.125 [2024-05-14 03:09:53.907945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:08.125 [2024-05-14 03:09:53.907955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:08.125 [2024-05-14 03:09:53.907964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:08.125 [2024-05-14 03:09:53.907974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:08.125 [2024-05-14 03:09:53.907983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:08.125 [2024-05-14 03:09:53.907993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:08.125 [2024-05-14 03:09:53.908004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:08.125 [2024-05-14 03:09:53.908014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:08.125 [2024-05-14 03:09:53.908023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:08.125 [2024-05-14 03:09:53.908033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:08.125 [2024-05-14 03:09:53.908042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:08.125 [2024-05-14 03:09:53.908052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:08.125 [2024-05-14 03:09:53.908061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:08.125 [2024-05-14 03:09:53.908079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:08.125 [2024-05-14 03:09:53.908088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:08.125 [2024-05-14 03:09:53.908098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:08.125 [2024-05-14 03:09:53.908108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:08.125 [2024-05-14 03:09:53.908117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:08.126 [2024-05-14 03:09:53.908127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:08.126 [2024-05-14 03:09:53.908196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:08.126 [2024-05-14 03:09:53.908208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:08.126 [2024-05-14 03:09:53.908219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:08.126 [2024-05-14 03:09:53.908231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 
00:23:08.126 [2024-05-14 03:09:53.908242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:08.126 [2024-05-14 03:09:53.908253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:08.126 [2024-05-14 03:09:53.908264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:08.126 [2024-05-14 03:09:53.908275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:08.126 [2024-05-14 03:09:53.908287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:08.126 [2024-05-14 03:09:53.908298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:08.126 [2024-05-14 03:09:53.908309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:08.126 [2024-05-14 03:09:53.908320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:08.126 [2024-05-14 03:09:53.908332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:08.126 [2024-05-14 03:09:53.908343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:08.126 [2024-05-14 03:09:53.908354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:08.126 [2024-05-14 03:09:53.908365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:08.126 [2024-05-14 03:09:53.908376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:08.126 [2024-05-14 03:09:53.908387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:08.126 [2024-05-14 03:09:53.908399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:08.126 [2024-05-14 03:09:53.908411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:08.126 [2024-05-14 03:09:53.908422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:08.126 [2024-05-14 03:09:53.908433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:08.126 [2024-05-14 03:09:53.908445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:08.126 [2024-05-14 03:09:53.908456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:08.126 [2024-05-14 03:09:53.908481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:08.126 [2024-05-14 03:09:53.908492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:08.126 [2024-05-14 03:09:53.908502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:08.126 [2024-05-14 03:09:53.908527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:08.126 [2024-05-14 03:09:53.908537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 
wr_cnt: 0 state: free 00:23:08.126 [2024-05-14 03:09:53.908547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:08.126 [2024-05-14 03:09:53.908571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:08.126 [2024-05-14 03:09:53.908580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:08.126 [2024-05-14 03:09:53.908591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:08.126 [2024-05-14 03:09:53.908600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:08.126 [2024-05-14 03:09:53.908610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:08.126 [2024-05-14 03:09:53.908619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:08.126 [2024-05-14 03:09:53.908629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:08.126 [2024-05-14 03:09:53.908638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:08.126 [2024-05-14 03:09:53.908648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:08.126 [2024-05-14 03:09:53.908657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:08.126 [2024-05-14 03:09:53.908667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:08.126 [2024-05-14 03:09:53.908676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:08.126 [2024-05-14 03:09:53.908685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:08.126 [2024-05-14 03:09:53.908695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:08.126 [2024-05-14 03:09:53.908705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:08.126 [2024-05-14 03:09:53.908714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:08.126 [2024-05-14 03:09:53.908724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:08.126 [2024-05-14 03:09:53.908733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:08.126 [2024-05-14 03:09:53.908743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:08.126 [2024-05-14 03:09:53.908752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:08.126 [2024-05-14 03:09:53.908762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:08.126 [2024-05-14 03:09:53.908773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:08.126 [2024-05-14 03:09:53.908783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:08.126 [2024-05-14 03:09:53.908792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:08.126 [2024-05-14 03:09:53.908802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:08.126 [2024-05-14 03:09:53.908811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:08.126 [2024-05-14 03:09:53.908828] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:08.126 [2024-05-14 03:09:53.908838] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 21b9fe78-b278-40de-b610-6cc3012e276f 00:23:08.126 [2024-05-14 03:09:53.908848] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 130048 00:23:08.126 [2024-05-14 03:09:53.908858] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 131008 00:23:08.126 [2024-05-14 03:09:53.908866] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 130048 00:23:08.126 [2024-05-14 03:09:53.908891] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0074 00:23:08.126 [2024-05-14 03:09:53.908900] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:08.126 [2024-05-14 03:09:53.908909] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:08.126 [2024-05-14 03:09:53.908918] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:08.126 [2024-05-14 03:09:53.908927] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:08.126 [2024-05-14 03:09:53.908935] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:08.126 [2024-05-14 03:09:53.908945] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.126 [2024-05-14 03:09:53.908954] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:08.126 [2024-05-14 03:09:53.908964] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.286 ms 00:23:08.126 [2024-05-14 03:09:53.908973] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.126 [2024-05-14 03:09:53.910223] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.126 [2024-05-14 03:09:53.910274] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:08.126 [2024-05-14 03:09:53.910299] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.231 ms 00:23:08.126 [2024-05-14 03:09:53.910308] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.126 [2024-05-14 03:09:53.910372] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.126 [2024-05-14 03:09:53.910389] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:08.126 [2024-05-14 03:09:53.910401] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:23:08.126 [2024-05-14 03:09:53.910411] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.126 [2024-05-14 03:09:53.914980] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:08.126 [2024-05-14 03:09:53.915027] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:08.126 [2024-05-14 03:09:53.915039] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:08.126 [2024-05-14 03:09:53.915059] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.126 [2024-05-14 03:09:53.915110] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:08.126 [2024-05-14 
03:09:53.915123] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:08.126 [2024-05-14 03:09:53.915133] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:08.126 [2024-05-14 03:09:53.915141] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.126 [2024-05-14 03:09:53.915219] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:08.126 [2024-05-14 03:09:53.915242] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:08.126 [2024-05-14 03:09:53.915252] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:08.126 [2024-05-14 03:09:53.915275] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.126 [2024-05-14 03:09:53.915310] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:08.126 [2024-05-14 03:09:53.915321] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:08.126 [2024-05-14 03:09:53.915331] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:08.126 [2024-05-14 03:09:53.915340] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.126 [2024-05-14 03:09:53.922945] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:08.126 [2024-05-14 03:09:53.923007] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:08.126 [2024-05-14 03:09:53.923022] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:08.126 [2024-05-14 03:09:53.923041] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.126 [2024-05-14 03:09:53.926554] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:08.127 [2024-05-14 03:09:53.926605] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:08.127 [2024-05-14 03:09:53.926635] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:08.127 [2024-05-14 03:09:53.926644] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.127 [2024-05-14 03:09:53.926674] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:08.127 [2024-05-14 03:09:53.926686] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:08.127 [2024-05-14 03:09:53.926704] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:08.127 [2024-05-14 03:09:53.926713] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.127 [2024-05-14 03:09:53.926760] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:08.127 [2024-05-14 03:09:53.926783] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:08.127 [2024-05-14 03:09:53.926793] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:08.127 [2024-05-14 03:09:53.926802] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.127 [2024-05-14 03:09:53.926923] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:08.127 [2024-05-14 03:09:53.926946] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:08.127 [2024-05-14 03:09:53.926957] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:08.127 [2024-05-14 03:09:53.926969] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.127 [2024-05-14 03:09:53.927018] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:08.127 [2024-05-14 03:09:53.927034] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:08.127 [2024-05-14 03:09:53.927044] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:08.127 [2024-05-14 03:09:53.927054] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.127 [2024-05-14 03:09:53.927094] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:08.127 [2024-05-14 03:09:53.927106] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:08.127 [2024-05-14 03:09:53.927116] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:08.127 [2024-05-14 03:09:53.927140] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.127 [2024-05-14 03:09:53.927185] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:08.127 [2024-05-14 03:09:53.927217] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:08.127 [2024-05-14 03:09:53.927227] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:08.127 [2024-05-14 03:09:53.927236] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.127 [2024-05-14 03:09:53.927373] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 201.769 ms, result 0 00:23:08.695 00:23:08.695 00:23:08.695 03:09:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:23:10.601 03:09:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:10.601 [2024-05-14 03:09:56.428957] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:23:10.601 [2024-05-14 03:09:56.429091] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94296 ] 00:23:10.601 [2024-05-14 03:09:56.562358] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
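The shutdown dump above reports 131008 total writes against 130048 user writes, which is consistent with the logged WAF of 1.0074 (131008 / 130048 ≈ 1.0074). The two commands echoed from ftl/dirty_shutdown.sh then checksum testfile2 and use spdk_dd to read the restarted FTL bdev back out into testfile, presumably so the data written before the dirty shutdown can be verified. Below is a minimal, hypothetical sketch of that read-back-and-compare step; it reuses only the flags and paths visible in the log (--ib, --of, --count, --json), and the final md5 comparison is an assumption about how the result is validated, not a transcript of the actual script.

```bash
#!/usr/bin/env bash
# Hypothetical sketch of the dirty-shutdown read-back step, based on the
# spdk_dd invocation echoed in the log above. Not the actual test script.
set -euo pipefail

SPDK_DIR=/home/vagrant/spdk_repo/spdk
FTL_JSON=$SPDK_DIR/test/ftl/config/ftl.json

# Checksum the file the test hashes at this point in the log (testfile2).
ref_md5=$(md5sum "$SPDK_DIR/test/ftl/testfile2" | cut -d' ' -f1)

# Read 262144 blocks back out of the restarted ftl0 bdev into a regular file,
# exactly as shown in the logged command line.
"$SPDK_DIR/build/bin/spdk_dd" \
    --ib=ftl0 \
    --of="$SPDK_DIR/test/ftl/testfile" \
    --count=262144 \
    --json="$FTL_JSON"

# Assumed comparison step: treat the copy as good if the checksums match
# (the real script may compare different files or use its own helper).
out_md5=$(md5sum "$SPDK_DIR/test/ftl/testfile" | cut -d' ' -f1)
if [[ "$ref_md5" == "$out_md5" ]]; then
    echo "FTL read-back matches reference checksum"
else
    echo "checksum mismatch after dirty shutdown" >&2
    exit 1
fi
```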
00:23:10.601 [2024-05-14 03:09:56.586387] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.860 [2024-05-14 03:09:56.632100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:10.860 [2024-05-14 03:09:56.729613] bdev.c:8090:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:10.860 [2024-05-14 03:09:56.729728] bdev.c:8090:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:10.860 [2024-05-14 03:09:56.881624] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.860 [2024-05-14 03:09:56.881708] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:10.860 [2024-05-14 03:09:56.881758] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:10.860 [2024-05-14 03:09:56.881771] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.860 [2024-05-14 03:09:56.881848] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.860 [2024-05-14 03:09:56.881868] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:10.860 [2024-05-14 03:09:56.881881] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:23:10.860 [2024-05-14 03:09:56.881892] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.860 [2024-05-14 03:09:56.881943] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:10.860 [2024-05-14 03:09:56.882281] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:10.860 [2024-05-14 03:09:56.882324] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.860 [2024-05-14 03:09:56.882338] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:10.860 [2024-05-14 03:09:56.882358] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.388 ms 00:23:10.860 [2024-05-14 03:09:56.882392] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.860 [2024-05-14 03:09:56.883618] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:10.860 [2024-05-14 03:09:56.886016] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.860 [2024-05-14 03:09:56.886074] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:10.860 [2024-05-14 03:09:56.886090] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.399 ms 00:23:10.860 [2024-05-14 03:09:56.886109] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.860 [2024-05-14 03:09:56.886201] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.860 [2024-05-14 03:09:56.886222] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:10.860 [2024-05-14 03:09:56.886235] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:23:10.860 [2024-05-14 03:09:56.886247] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.120 [2024-05-14 03:09:56.891153] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.120 [2024-05-14 03:09:56.891239] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:11.120 [2024-05-14 03:09:56.891276] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.817 ms 00:23:11.120 [2024-05-14 03:09:56.891287] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.120 [2024-05-14 03:09:56.891376] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.120 [2024-05-14 03:09:56.891398] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:11.120 [2024-05-14 03:09:56.891411] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:23:11.120 [2024-05-14 03:09:56.891421] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.120 [2024-05-14 03:09:56.891522] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.120 [2024-05-14 03:09:56.891540] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:11.120 [2024-05-14 03:09:56.891553] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:11.120 [2024-05-14 03:09:56.891586] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.120 [2024-05-14 03:09:56.891624] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:11.120 [2024-05-14 03:09:56.892993] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.120 [2024-05-14 03:09:56.893038] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:11.120 [2024-05-14 03:09:56.893068] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.382 ms 00:23:11.120 [2024-05-14 03:09:56.893077] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.120 [2024-05-14 03:09:56.893111] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.120 [2024-05-14 03:09:56.893126] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:11.120 [2024-05-14 03:09:56.893138] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:11.120 [2024-05-14 03:09:56.893167] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.120 [2024-05-14 03:09:56.893195] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:11.120 [2024-05-14 03:09:56.893221] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:23:11.120 [2024-05-14 03:09:56.893257] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:11.120 [2024-05-14 03:09:56.893313] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:23:11.120 [2024-05-14 03:09:56.893390] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:23:11.120 [2024-05-14 03:09:56.893406] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:11.120 [2024-05-14 03:09:56.893428] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:23:11.120 [2024-05-14 03:09:56.893442] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:11.120 [2024-05-14 03:09:56.893455] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:11.120 [2024-05-14 03:09:56.893467] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:11.120 [2024-05-14 03:09:56.893477] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P 
address size: 4 00:23:11.120 [2024-05-14 03:09:56.893487] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:23:11.120 [2024-05-14 03:09:56.893507] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:23:11.120 [2024-05-14 03:09:56.893519] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.120 [2024-05-14 03:09:56.893530] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:11.120 [2024-05-14 03:09:56.893541] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.327 ms 00:23:11.120 [2024-05-14 03:09:56.893551] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.120 [2024-05-14 03:09:56.893621] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.120 [2024-05-14 03:09:56.893635] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:11.120 [2024-05-14 03:09:56.893647] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:23:11.120 [2024-05-14 03:09:56.893657] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.120 [2024-05-14 03:09:56.893747] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:11.120 [2024-05-14 03:09:56.893779] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:11.120 [2024-05-14 03:09:56.893794] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:11.120 [2024-05-14 03:09:56.893806] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:11.120 [2024-05-14 03:09:56.893817] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:11.120 [2024-05-14 03:09:56.893827] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:11.120 [2024-05-14 03:09:56.893837] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:11.121 [2024-05-14 03:09:56.893851] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:11.121 [2024-05-14 03:09:56.893862] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:11.121 [2024-05-14 03:09:56.893871] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:11.121 [2024-05-14 03:09:56.893881] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:11.121 [2024-05-14 03:09:56.893891] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:11.121 [2024-05-14 03:09:56.893901] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:11.121 [2024-05-14 03:09:56.893911] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:11.121 [2024-05-14 03:09:56.893921] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:23:11.121 [2024-05-14 03:09:56.893942] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:11.121 [2024-05-14 03:09:56.893953] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:11.121 [2024-05-14 03:09:56.893966] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:23:11.121 [2024-05-14 03:09:56.893977] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:11.121 [2024-05-14 03:09:56.893987] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:23:11.121 [2024-05-14 03:09:56.893997] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:23:11.121 [2024-05-14 03:09:56.894007] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:23:11.121 [2024-05-14 03:09:56.894017] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:11.121 [2024-05-14 03:09:56.894027] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:11.121 [2024-05-14 03:09:56.894037] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:11.121 [2024-05-14 03:09:56.894047] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:11.121 [2024-05-14 03:09:56.894057] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:23:11.121 [2024-05-14 03:09:56.894067] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:11.121 [2024-05-14 03:09:56.894077] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:11.121 [2024-05-14 03:09:56.894086] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:11.121 [2024-05-14 03:09:56.894096] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:11.121 [2024-05-14 03:09:56.894105] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:11.121 [2024-05-14 03:09:56.894115] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:23:11.121 [2024-05-14 03:09:56.894164] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:11.121 [2024-05-14 03:09:56.894196] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:11.121 [2024-05-14 03:09:56.894207] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:11.121 [2024-05-14 03:09:56.894217] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:11.121 [2024-05-14 03:09:56.894228] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:11.121 [2024-05-14 03:09:56.894240] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:23:11.121 [2024-05-14 03:09:56.894253] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:11.121 [2024-05-14 03:09:56.894263] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:11.121 [2024-05-14 03:09:56.894296] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:11.121 [2024-05-14 03:09:56.894308] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:11.121 [2024-05-14 03:09:56.894329] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:11.121 [2024-05-14 03:09:56.894342] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:11.121 [2024-05-14 03:09:56.894353] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:11.121 [2024-05-14 03:09:56.894364] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:11.121 [2024-05-14 03:09:56.894375] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:11.121 [2024-05-14 03:09:56.894385] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:11.121 [2024-05-14 03:09:56.894401] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:11.121 [2024-05-14 03:09:56.894414] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:11.121 [2024-05-14 03:09:56.894437] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:11.121 [2024-05-14 
03:09:56.894450] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:11.121 [2024-05-14 03:09:56.894462] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:23:11.121 [2024-05-14 03:09:56.894474] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:23:11.121 [2024-05-14 03:09:56.894491] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:23:11.121 [2024-05-14 03:09:56.894502] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:23:11.121 [2024-05-14 03:09:56.894529] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:23:11.121 [2024-05-14 03:09:56.894540] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:23:11.121 [2024-05-14 03:09:56.894567] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:23:11.121 [2024-05-14 03:09:56.894578] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:23:11.121 [2024-05-14 03:09:56.894589] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:23:11.121 [2024-05-14 03:09:56.894600] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:23:11.121 [2024-05-14 03:09:56.894612] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:23:11.121 [2024-05-14 03:09:56.894623] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:23:11.121 [2024-05-14 03:09:56.894638] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:11.121 [2024-05-14 03:09:56.894651] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:11.121 [2024-05-14 03:09:56.894663] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:11.121 [2024-05-14 03:09:56.894674] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:11.121 [2024-05-14 03:09:56.894685] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:11.121 [2024-05-14 03:09:56.894697] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:11.121 [2024-05-14 03:09:56.894712] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.121 [2024-05-14 03:09:56.894739] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:11.121 [2024-05-14 03:09:56.894751] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.008 ms 00:23:11.121 [2024-05-14 03:09:56.894765] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.121 [2024-05-14 03:09:56.900823] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.121 [2024-05-14 03:09:56.900881] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:11.121 [2024-05-14 03:09:56.900898] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.008 ms 00:23:11.121 [2024-05-14 03:09:56.900909] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.121 [2024-05-14 03:09:56.900992] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.121 [2024-05-14 03:09:56.901007] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:11.121 [2024-05-14 03:09:56.901018] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:23:11.121 [2024-05-14 03:09:56.901028] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.121 [2024-05-14 03:09:56.922064] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.121 [2024-05-14 03:09:56.922159] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:11.121 [2024-05-14 03:09:56.922185] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.967 ms 00:23:11.121 [2024-05-14 03:09:56.922202] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.121 [2024-05-14 03:09:56.922267] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.121 [2024-05-14 03:09:56.922297] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:11.121 [2024-05-14 03:09:56.922317] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:11.121 [2024-05-14 03:09:56.922336] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.121 [2024-05-14 03:09:56.922792] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.121 [2024-05-14 03:09:56.922830] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:11.121 [2024-05-14 03:09:56.922856] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.341 ms 00:23:11.121 [2024-05-14 03:09:56.922870] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.121 [2024-05-14 03:09:56.923073] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.121 [2024-05-14 03:09:56.923108] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:11.121 [2024-05-14 03:09:56.923126] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:23:11.121 [2024-05-14 03:09:56.923164] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.121 [2024-05-14 03:09:56.929061] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.121 [2024-05-14 03:09:56.929115] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:11.121 [2024-05-14 03:09:56.929167] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.853 ms 00:23:11.121 [2024-05-14 03:09:56.929187] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.121 [2024-05-14 03:09:56.931459] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:23:11.121 [2024-05-14 03:09:56.931515] 
ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:11.121 [2024-05-14 03:09:56.931552] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.121 [2024-05-14 03:09:56.931564] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:11.121 [2024-05-14 03:09:56.931575] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.248 ms 00:23:11.121 [2024-05-14 03:09:56.931585] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.121 [2024-05-14 03:09:56.945223] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.122 [2024-05-14 03:09:56.945277] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:11.122 [2024-05-14 03:09:56.945309] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.595 ms 00:23:11.122 [2024-05-14 03:09:56.945320] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.122 [2024-05-14 03:09:56.947234] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.122 [2024-05-14 03:09:56.947285] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:11.122 [2024-05-14 03:09:56.947316] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.854 ms 00:23:11.122 [2024-05-14 03:09:56.947326] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.122 [2024-05-14 03:09:56.949069] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.122 [2024-05-14 03:09:56.949173] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:11.122 [2024-05-14 03:09:56.949191] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.703 ms 00:23:11.122 [2024-05-14 03:09:56.949202] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.122 [2024-05-14 03:09:56.949445] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.122 [2024-05-14 03:09:56.949475] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:11.122 [2024-05-14 03:09:56.949517] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.173 ms 00:23:11.122 [2024-05-14 03:09:56.949528] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.122 [2024-05-14 03:09:56.966844] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.122 [2024-05-14 03:09:56.966918] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:11.122 [2024-05-14 03:09:56.966938] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.287 ms 00:23:11.122 [2024-05-14 03:09:56.966963] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.122 [2024-05-14 03:09:56.974006] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:11.122 [2024-05-14 03:09:56.976283] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.122 [2024-05-14 03:09:56.976334] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:11.122 [2024-05-14 03:09:56.976350] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.264 ms 00:23:11.122 [2024-05-14 03:09:56.976361] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.122 [2024-05-14 03:09:56.976444] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.122 [2024-05-14 
03:09:56.976478] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:11.122 [2024-05-14 03:09:56.976490] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:11.122 [2024-05-14 03:09:56.976500] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.122 [2024-05-14 03:09:56.977677] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.122 [2024-05-14 03:09:56.977743] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:11.122 [2024-05-14 03:09:56.977758] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.140 ms 00:23:11.122 [2024-05-14 03:09:56.977774] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.122 [2024-05-14 03:09:56.979678] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.122 [2024-05-14 03:09:56.979744] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:23:11.122 [2024-05-14 03:09:56.979791] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.870 ms 00:23:11.122 [2024-05-14 03:09:56.979802] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.122 [2024-05-14 03:09:56.979851] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.122 [2024-05-14 03:09:56.979878] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:11.122 [2024-05-14 03:09:56.979891] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:11.122 [2024-05-14 03:09:56.979907] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.122 [2024-05-14 03:09:56.979947] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:11.122 [2024-05-14 03:09:56.979963] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.122 [2024-05-14 03:09:56.979974] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:11.122 [2024-05-14 03:09:56.980000] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:23:11.122 [2024-05-14 03:09:56.980027] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.122 [2024-05-14 03:09:56.983602] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.122 [2024-05-14 03:09:56.983655] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:11.122 [2024-05-14 03:09:56.983687] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.548 ms 00:23:11.122 [2024-05-14 03:09:56.983706] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.122 [2024-05-14 03:09:56.983788] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.122 [2024-05-14 03:09:56.983806] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:11.122 [2024-05-14 03:09:56.983818] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:23:11.122 [2024-05-14 03:09:56.983828] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.122 [2024-05-14 03:09:56.991518] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 108.188 ms, result 0 00:23:50.224  Copying: 944/1048576 [kB] (944 kBps) Copying: 5380/1048576 [kB] (4436 kBps) Copying: 31/1024 [MB] (26 MBps) Copying: 59/1024 [MB] (28 MBps) Copying: 88/1024 [MB] (28 MBps) Copying: 116/1024 [MB] (28 
MBps) Copying: 144/1024 [MB] (28 MBps) Copying: 172/1024 [MB] (27 MBps) Copying: 200/1024 [MB] (27 MBps) Copying: 228/1024 [MB] (27 MBps) Copying: 256/1024 [MB] (28 MBps) Copying: 283/1024 [MB] (27 MBps) Copying: 311/1024 [MB] (27 MBps) Copying: 338/1024 [MB] (27 MBps) Copying: 366/1024 [MB] (27 MBps) Copying: 394/1024 [MB] (27 MBps) Copying: 421/1024 [MB] (27 MBps) Copying: 449/1024 [MB] (27 MBps) Copying: 476/1024 [MB] (27 MBps) Copying: 505/1024 [MB] (28 MBps) Copying: 533/1024 [MB] (28 MBps) Copying: 561/1024 [MB] (28 MBps) Copying: 589/1024 [MB] (28 MBps) Copying: 617/1024 [MB] (27 MBps) Copying: 645/1024 [MB] (28 MBps) Copying: 673/1024 [MB] (28 MBps) Copying: 701/1024 [MB] (28 MBps) Copying: 729/1024 [MB] (28 MBps) Copying: 757/1024 [MB] (27 MBps) Copying: 785/1024 [MB] (28 MBps) Copying: 814/1024 [MB] (28 MBps) Copying: 842/1024 [MB] (28 MBps) Copying: 870/1024 [MB] (28 MBps) Copying: 898/1024 [MB] (28 MBps) Copying: 927/1024 [MB] (28 MBps) Copying: 955/1024 [MB] (28 MBps) Copying: 983/1024 [MB] (27 MBps) Copying: 1010/1024 [MB] (27 MBps) Copying: 1024/1024 [MB] (average 26 MBps)[2024-05-14 03:10:36.059685] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.224 [2024-05-14 03:10:36.059815] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:50.224 [2024-05-14 03:10:36.059869] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:50.224 [2024-05-14 03:10:36.059893] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.224 [2024-05-14 03:10:36.059968] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:50.224 [2024-05-14 03:10:36.060707] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.224 [2024-05-14 03:10:36.060742] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:50.224 [2024-05-14 03:10:36.060770] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.699 ms 00:23:50.224 [2024-05-14 03:10:36.060789] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.224 [2024-05-14 03:10:36.061232] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.224 [2024-05-14 03:10:36.061280] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:50.224 [2024-05-14 03:10:36.061309] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.370 ms 00:23:50.224 [2024-05-14 03:10:36.061325] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.224 [2024-05-14 03:10:36.076050] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.224 [2024-05-14 03:10:36.076128] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:50.224 [2024-05-14 03:10:36.076214] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.693 ms 00:23:50.224 [2024-05-14 03:10:36.076231] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.224 [2024-05-14 03:10:36.084739] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.224 [2024-05-14 03:10:36.084787] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:23:50.224 [2024-05-14 03:10:36.084808] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.402 ms 00:23:50.224 [2024-05-14 03:10:36.084835] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.224 [2024-05-14 03:10:36.086435] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.224 [2024-05-14 03:10:36.086485] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:50.224 [2024-05-14 03:10:36.086506] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.504 ms 00:23:50.224 [2024-05-14 03:10:36.086521] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.224 [2024-05-14 03:10:36.089411] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.224 [2024-05-14 03:10:36.089463] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:50.224 [2024-05-14 03:10:36.089483] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.840 ms 00:23:50.224 [2024-05-14 03:10:36.089499] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.224 [2024-05-14 03:10:36.093050] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.224 [2024-05-14 03:10:36.093106] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:50.224 [2024-05-14 03:10:36.093155] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.497 ms 00:23:50.224 [2024-05-14 03:10:36.093173] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.224 [2024-05-14 03:10:36.094865] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.224 [2024-05-14 03:10:36.094917] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:23:50.224 [2024-05-14 03:10:36.094936] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.662 ms 00:23:50.224 [2024-05-14 03:10:36.094951] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.224 [2024-05-14 03:10:36.096434] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.224 [2024-05-14 03:10:36.096483] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:23:50.224 [2024-05-14 03:10:36.096502] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.434 ms 00:23:50.224 [2024-05-14 03:10:36.096517] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.224 [2024-05-14 03:10:36.097823] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.225 [2024-05-14 03:10:36.097872] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:50.225 [2024-05-14 03:10:36.097891] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.256 ms 00:23:50.225 [2024-05-14 03:10:36.097925] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.225 [2024-05-14 03:10:36.099183] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.225 [2024-05-14 03:10:36.099229] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:50.225 [2024-05-14 03:10:36.099248] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.175 ms 00:23:50.225 [2024-05-14 03:10:36.099262] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.225 [2024-05-14 03:10:36.099310] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:50.225 [2024-05-14 03:10:36.099338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:23:50.225 [2024-05-14 03:10:36.099357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3584 / 261120 wr_cnt: 1 state: open 00:23:50.225 [2024-05-14 
03:10:36.099373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.099388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.099406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.099437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.099462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.099490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.099511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.099527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.099548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.099575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.099593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.099608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.099623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.099638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.099660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.099679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.099698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.099726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.099752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.099769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.099785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.099800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.099819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.099847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.099864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 
00:23:50.225 [2024-05-14 03:10:36.099886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.099903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.099935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.099962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.099984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.100005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.100032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.100059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.100076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.100092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.100107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.100128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.100185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.100208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.100234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.100253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.100279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.100299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.100315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.100330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.100345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.100364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.100390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.100407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.100422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 
wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.100445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.100477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.100505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.100523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.100538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.100553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.100576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.100597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.100619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.100635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.100650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.100668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.100695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.100722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.100742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.100757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.100772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.100787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.100805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.100831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.100854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.100870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.100886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.100901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.100916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.100937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.100957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.100984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.101004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.101027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.101052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.101080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.101101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:50.225 [2024-05-14 03:10:36.101117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:50.226 [2024-05-14 03:10:36.101149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:50.226 [2024-05-14 03:10:36.101178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:50.226 [2024-05-14 03:10:36.101197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:50.226 [2024-05-14 03:10:36.101219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:50.226 [2024-05-14 03:10:36.101236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:50.226 [2024-05-14 03:10:36.101252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:50.226 [2024-05-14 03:10:36.101278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:50.226 [2024-05-14 03:10:36.101300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:50.226 [2024-05-14 03:10:36.101318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:50.226 [2024-05-14 03:10:36.101342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:50.226 [2024-05-14 03:10:36.101369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:50.226 [2024-05-14 03:10:36.101395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:50.226 [2024-05-14 03:10:36.101412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:50.226 [2024-05-14 03:10:36.101427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:50.226 [2024-05-14 03:10:36.101455] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:50.226 [2024-05-14 03:10:36.101500] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 
21b9fe78-b278-40de-b610-6cc3012e276f 00:23:50.226 [2024-05-14 03:10:36.101517] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264704 00:23:50.226 [2024-05-14 03:10:36.101531] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 136640 00:23:50.226 [2024-05-14 03:10:36.101546] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 134656 00:23:50.226 [2024-05-14 03:10:36.101571] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0147 00:23:50.226 [2024-05-14 03:10:36.101596] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:50.226 [2024-05-14 03:10:36.101628] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:50.226 [2024-05-14 03:10:36.101655] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:50.226 [2024-05-14 03:10:36.101669] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:50.226 [2024-05-14 03:10:36.101690] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:50.226 [2024-05-14 03:10:36.101713] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.226 [2024-05-14 03:10:36.101730] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:50.226 [2024-05-14 03:10:36.101750] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.405 ms 00:23:50.226 [2024-05-14 03:10:36.101797] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.226 [2024-05-14 03:10:36.103360] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.226 [2024-05-14 03:10:36.103401] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:50.226 [2024-05-14 03:10:36.103420] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.528 ms 00:23:50.226 [2024-05-14 03:10:36.103445] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.226 [2024-05-14 03:10:36.103536] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.226 [2024-05-14 03:10:36.103571] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:50.226 [2024-05-14 03:10:36.103594] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:23:50.226 [2024-05-14 03:10:36.103636] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.226 [2024-05-14 03:10:36.109612] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:50.226 [2024-05-14 03:10:36.109662] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:50.226 [2024-05-14 03:10:36.109691] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:50.226 [2024-05-14 03:10:36.109707] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.226 [2024-05-14 03:10:36.109780] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:50.226 [2024-05-14 03:10:36.109800] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:50.226 [2024-05-14 03:10:36.109816] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:50.226 [2024-05-14 03:10:36.109831] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.226 [2024-05-14 03:10:36.109957] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:50.226 [2024-05-14 03:10:36.110000] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 
00:23:50.226 [2024-05-14 03:10:36.110028] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:50.226 [2024-05-14 03:10:36.110056] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.226 [2024-05-14 03:10:36.110094] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:50.226 [2024-05-14 03:10:36.110125] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:50.226 [2024-05-14 03:10:36.110179] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:50.226 [2024-05-14 03:10:36.110195] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.226 [2024-05-14 03:10:36.119616] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:50.226 [2024-05-14 03:10:36.119675] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:50.226 [2024-05-14 03:10:36.119708] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:50.226 [2024-05-14 03:10:36.119724] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.226 [2024-05-14 03:10:36.124280] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:50.226 [2024-05-14 03:10:36.124333] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:50.226 [2024-05-14 03:10:36.124355] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:50.226 [2024-05-14 03:10:36.124372] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.226 [2024-05-14 03:10:36.124448] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:50.226 [2024-05-14 03:10:36.124489] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:50.226 [2024-05-14 03:10:36.124506] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:50.226 [2024-05-14 03:10:36.124521] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.226 [2024-05-14 03:10:36.124595] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:50.226 [2024-05-14 03:10:36.124626] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:50.226 [2024-05-14 03:10:36.124651] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:50.226 [2024-05-14 03:10:36.124673] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.226 [2024-05-14 03:10:36.124823] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:50.226 [2024-05-14 03:10:36.124877] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:50.226 [2024-05-14 03:10:36.124902] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:50.226 [2024-05-14 03:10:36.124917] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.226 [2024-05-14 03:10:36.124998] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:50.226 [2024-05-14 03:10:36.125035] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:50.226 [2024-05-14 03:10:36.125058] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:50.226 [2024-05-14 03:10:36.125073] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.226 [2024-05-14 03:10:36.125171] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:50.226 [2024-05-14 03:10:36.125206] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:50.226 [2024-05-14 03:10:36.125231] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:50.226 [2024-05-14 03:10:36.125287] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.226 [2024-05-14 03:10:36.125373] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:50.226 [2024-05-14 03:10:36.125400] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:50.226 [2024-05-14 03:10:36.125423] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:50.226 [2024-05-14 03:10:36.125449] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.226 [2024-05-14 03:10:36.125677] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 65.946 ms, result 0 00:23:50.485 00:23:50.485 00:23:50.485 03:10:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:52.386 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:23:52.386 03:10:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:52.386 [2024-05-14 03:10:38.225475] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:23:52.386 [2024-05-14 03:10:38.225618] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94715 ] 00:23:52.386 [2024-05-14 03:10:38.359646] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
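A note on the read-back step launched just above: dirty_shutdown.sh first confirms the earlier read with md5sum -c (testfile: OK), then starts a second spdk_dd pass against ftl0 with --count=262144 --skip=262144. Assuming 4096-byte blocks on the ftl0 bdev (consistent with the layout dump below, where a 0x20-block region is reported as 0.12 MiB, i.e. roughly 4096 bytes per block), the skip moves past the first 262144 x 4096 B = 1 GiB and the pass reads the next 1 GiB into testfile2, which lines up with the 1024 MB of copy progress reported further down.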
00:23:52.386 [2024-05-14 03:10:38.383373] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.645 [2024-05-14 03:10:38.426897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:52.645 [2024-05-14 03:10:38.521089] bdev.c:8090:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:52.645 [2024-05-14 03:10:38.521214] bdev.c:8090:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:52.645 [2024-05-14 03:10:38.669884] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.645 [2024-05-14 03:10:38.669959] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:52.645 [2024-05-14 03:10:38.669991] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:52.645 [2024-05-14 03:10:38.670002] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.645 [2024-05-14 03:10:38.670072] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.645 [2024-05-14 03:10:38.670091] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:52.645 [2024-05-14 03:10:38.670102] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:23:52.645 [2024-05-14 03:10:38.670112] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.645 [2024-05-14 03:10:38.670152] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:52.645 [2024-05-14 03:10:38.670542] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:52.645 [2024-05-14 03:10:38.670593] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.645 [2024-05-14 03:10:38.670608] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:52.645 [2024-05-14 03:10:38.670623] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.443 ms 00:23:52.645 [2024-05-14 03:10:38.670657] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.906 [2024-05-14 03:10:38.671782] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:52.906 [2024-05-14 03:10:38.674007] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.906 [2024-05-14 03:10:38.674046] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:52.906 [2024-05-14 03:10:38.674059] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.227 ms 00:23:52.906 [2024-05-14 03:10:38.674090] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.906 [2024-05-14 03:10:38.674192] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.906 [2024-05-14 03:10:38.674227] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:52.906 [2024-05-14 03:10:38.674242] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:23:52.906 [2024-05-14 03:10:38.674252] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.906 [2024-05-14 03:10:38.678507] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.906 [2024-05-14 03:10:38.678553] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:52.906 [2024-05-14 03:10:38.678572] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.177 ms 00:23:52.906 [2024-05-14 03:10:38.678582] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.906 [2024-05-14 03:10:38.678687] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.906 [2024-05-14 03:10:38.678703] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:52.906 [2024-05-14 03:10:38.678720] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:23:52.906 [2024-05-14 03:10:38.678737] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.906 [2024-05-14 03:10:38.678798] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.906 [2024-05-14 03:10:38.678813] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:52.906 [2024-05-14 03:10:38.678824] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:52.906 [2024-05-14 03:10:38.678837] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.906 [2024-05-14 03:10:38.678866] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:52.906 [2024-05-14 03:10:38.680028] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.906 [2024-05-14 03:10:38.680067] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:52.906 [2024-05-14 03:10:38.680082] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.169 ms 00:23:52.906 [2024-05-14 03:10:38.680092] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.906 [2024-05-14 03:10:38.680124] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.906 [2024-05-14 03:10:38.680190] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:52.906 [2024-05-14 03:10:38.680234] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:52.906 [2024-05-14 03:10:38.680265] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.906 [2024-05-14 03:10:38.680306] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:52.906 [2024-05-14 03:10:38.680365] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:23:52.906 [2024-05-14 03:10:38.680415] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:52.906 [2024-05-14 03:10:38.680435] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:23:52.906 [2024-05-14 03:10:38.680579] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:23:52.906 [2024-05-14 03:10:38.680625] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:52.906 [2024-05-14 03:10:38.680642] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:23:52.906 [2024-05-14 03:10:38.680655] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:52.907 [2024-05-14 03:10:38.680666] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:52.907 [2024-05-14 03:10:38.680701] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:52.907 [2024-05-14 03:10:38.680712] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P 
address size: 4 00:23:52.907 [2024-05-14 03:10:38.680738] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:23:52.907 [2024-05-14 03:10:38.680753] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:23:52.907 [2024-05-14 03:10:38.680770] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.907 [2024-05-14 03:10:38.680780] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:52.907 [2024-05-14 03:10:38.680790] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.467 ms 00:23:52.907 [2024-05-14 03:10:38.680810] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.907 [2024-05-14 03:10:38.680904] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.907 [2024-05-14 03:10:38.680921] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:52.907 [2024-05-14 03:10:38.680931] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:23:52.907 [2024-05-14 03:10:38.680950] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.907 [2024-05-14 03:10:38.681028] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:52.907 [2024-05-14 03:10:38.681052] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:52.907 [2024-05-14 03:10:38.681068] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:52.907 [2024-05-14 03:10:38.681092] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:52.907 [2024-05-14 03:10:38.681104] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:52.907 [2024-05-14 03:10:38.681113] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:52.907 [2024-05-14 03:10:38.681122] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:52.907 [2024-05-14 03:10:38.681131] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:52.907 [2024-05-14 03:10:38.681140] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:52.907 [2024-05-14 03:10:38.681148] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:52.907 [2024-05-14 03:10:38.681159] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:52.907 [2024-05-14 03:10:38.681179] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:52.907 [2024-05-14 03:10:38.681190] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:52.907 [2024-05-14 03:10:38.681199] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:52.907 [2024-05-14 03:10:38.681207] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:23:52.907 [2024-05-14 03:10:38.681233] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:52.907 [2024-05-14 03:10:38.681245] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:52.907 [2024-05-14 03:10:38.681254] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:23:52.907 [2024-05-14 03:10:38.681263] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:52.907 [2024-05-14 03:10:38.681271] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:23:52.907 [2024-05-14 03:10:38.681279] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:23:52.907 [2024-05-14 03:10:38.681289] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:23:52.907 [2024-05-14 03:10:38.681297] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:52.907 [2024-05-14 03:10:38.681336] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:52.907 [2024-05-14 03:10:38.681346] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:52.907 [2024-05-14 03:10:38.681354] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:52.907 [2024-05-14 03:10:38.681377] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:23:52.907 [2024-05-14 03:10:38.681410] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:52.907 [2024-05-14 03:10:38.681426] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:52.907 [2024-05-14 03:10:38.681442] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:52.907 [2024-05-14 03:10:38.681459] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:52.907 [2024-05-14 03:10:38.681473] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:52.907 [2024-05-14 03:10:38.681483] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:23:52.907 [2024-05-14 03:10:38.681492] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:52.907 [2024-05-14 03:10:38.681500] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:52.907 [2024-05-14 03:10:38.681508] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:52.907 [2024-05-14 03:10:38.681518] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:52.907 [2024-05-14 03:10:38.681528] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:52.907 [2024-05-14 03:10:38.681542] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:23:52.907 [2024-05-14 03:10:38.681559] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:52.907 [2024-05-14 03:10:38.681571] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:52.907 [2024-05-14 03:10:38.681586] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:52.907 [2024-05-14 03:10:38.681595] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:52.907 [2024-05-14 03:10:38.681607] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:52.907 [2024-05-14 03:10:38.681617] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:52.907 [2024-05-14 03:10:38.681626] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:52.907 [2024-05-14 03:10:38.681634] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:52.907 [2024-05-14 03:10:38.681643] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:52.907 [2024-05-14 03:10:38.681651] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:52.907 [2024-05-14 03:10:38.681660] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:52.907 [2024-05-14 03:10:38.681671] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:52.907 [2024-05-14 03:10:38.681689] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:52.907 [2024-05-14 
03:10:38.681712] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:52.907 [2024-05-14 03:10:38.681725] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:23:52.907 [2024-05-14 03:10:38.681742] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:23:52.907 [2024-05-14 03:10:38.681757] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:23:52.907 [2024-05-14 03:10:38.681767] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:23:52.907 [2024-05-14 03:10:38.681776] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:23:52.907 [2024-05-14 03:10:38.681785] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:23:52.907 [2024-05-14 03:10:38.681798] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:23:52.907 [2024-05-14 03:10:38.681808] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:23:52.907 [2024-05-14 03:10:38.681817] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:23:52.907 [2024-05-14 03:10:38.681828] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:23:52.907 [2024-05-14 03:10:38.681844] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:23:52.907 [2024-05-14 03:10:38.681861] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:23:52.907 [2024-05-14 03:10:38.681872] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:52.907 [2024-05-14 03:10:38.681882] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:52.907 [2024-05-14 03:10:38.681893] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:52.907 [2024-05-14 03:10:38.681903] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:52.907 [2024-05-14 03:10:38.681912] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:52.907 [2024-05-14 03:10:38.681921] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:52.907 [2024-05-14 03:10:38.681931] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.907 [2024-05-14 03:10:38.681946] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:52.907 [2024-05-14 03:10:38.681964] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.944 ms 00:23:52.907 [2024-05-14 03:10:38.681982] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.907 [2024-05-14 03:10:38.687392] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.907 [2024-05-14 03:10:38.687426] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:52.907 [2024-05-14 03:10:38.687451] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.362 ms 00:23:52.907 [2024-05-14 03:10:38.687469] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.907 [2024-05-14 03:10:38.687560] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.907 [2024-05-14 03:10:38.687582] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:52.907 [2024-05-14 03:10:38.687593] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:23:52.907 [2024-05-14 03:10:38.687605] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.907 [2024-05-14 03:10:38.702121] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.907 [2024-05-14 03:10:38.702183] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:52.907 [2024-05-14 03:10:38.702200] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.460 ms 00:23:52.907 [2024-05-14 03:10:38.702210] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.907 [2024-05-14 03:10:38.702254] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.907 [2024-05-14 03:10:38.702269] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:52.907 [2024-05-14 03:10:38.702280] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:52.908 [2024-05-14 03:10:38.702295] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.908 [2024-05-14 03:10:38.702692] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.908 [2024-05-14 03:10:38.702716] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:52.908 [2024-05-14 03:10:38.702728] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:23:52.908 [2024-05-14 03:10:38.702738] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.908 [2024-05-14 03:10:38.702873] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.908 [2024-05-14 03:10:38.702895] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:52.908 [2024-05-14 03:10:38.702907] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:23:52.908 [2024-05-14 03:10:38.702916] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.908 [2024-05-14 03:10:38.707953] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.908 [2024-05-14 03:10:38.707992] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:52.908 [2024-05-14 03:10:38.708005] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.008 ms 00:23:52.908 [2024-05-14 03:10:38.708014] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.908 [2024-05-14 03:10:38.710147] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:52.908 [2024-05-14 03:10:38.710207] 
ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:52.908 [2024-05-14 03:10:38.710226] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.908 [2024-05-14 03:10:38.710237] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:52.908 [2024-05-14 03:10:38.710248] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.120 ms 00:23:52.908 [2024-05-14 03:10:38.710257] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.908 [2024-05-14 03:10:38.722909] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.908 [2024-05-14 03:10:38.722946] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:52.908 [2024-05-14 03:10:38.722961] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.612 ms 00:23:52.908 [2024-05-14 03:10:38.722985] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.908 [2024-05-14 03:10:38.724786] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.908 [2024-05-14 03:10:38.724822] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:52.908 [2024-05-14 03:10:38.724840] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.760 ms 00:23:52.908 [2024-05-14 03:10:38.724850] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.908 [2024-05-14 03:10:38.726341] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.908 [2024-05-14 03:10:38.726376] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:52.908 [2024-05-14 03:10:38.726389] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.456 ms 00:23:52.908 [2024-05-14 03:10:38.726399] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.908 [2024-05-14 03:10:38.726645] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.908 [2024-05-14 03:10:38.726672] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:52.908 [2024-05-14 03:10:38.726683] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:23:52.908 [2024-05-14 03:10:38.726693] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.908 [2024-05-14 03:10:38.742477] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.908 [2024-05-14 03:10:38.742531] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:52.908 [2024-05-14 03:10:38.742548] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.759 ms 00:23:52.908 [2024-05-14 03:10:38.742570] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.908 [2024-05-14 03:10:38.749199] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:52.908 [2024-05-14 03:10:38.751083] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.908 [2024-05-14 03:10:38.751115] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:52.908 [2024-05-14 03:10:38.751129] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.455 ms 00:23:52.908 [2024-05-14 03:10:38.751188] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.908 [2024-05-14 03:10:38.751288] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.908 [2024-05-14 
03:10:38.751315] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:52.908 [2024-05-14 03:10:38.751327] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:52.908 [2024-05-14 03:10:38.751337] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.908 [2024-05-14 03:10:38.751975] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.908 [2024-05-14 03:10:38.752006] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:52.908 [2024-05-14 03:10:38.752024] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.601 ms 00:23:52.908 [2024-05-14 03:10:38.752035] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.908 [2024-05-14 03:10:38.753794] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.908 [2024-05-14 03:10:38.753852] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:23:52.908 [2024-05-14 03:10:38.753865] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.737 ms 00:23:52.908 [2024-05-14 03:10:38.753875] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.908 [2024-05-14 03:10:38.753908] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.908 [2024-05-14 03:10:38.753921] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:52.908 [2024-05-14 03:10:38.753931] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:52.908 [2024-05-14 03:10:38.753945] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.908 [2024-05-14 03:10:38.753998] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:52.908 [2024-05-14 03:10:38.754015] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.908 [2024-05-14 03:10:38.754024] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:52.908 [2024-05-14 03:10:38.754033] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:23:52.908 [2024-05-14 03:10:38.754042] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.908 [2024-05-14 03:10:38.757120] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.908 [2024-05-14 03:10:38.757168] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:52.908 [2024-05-14 03:10:38.757200] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.058 ms 00:23:52.908 [2024-05-14 03:10:38.757218] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.908 [2024-05-14 03:10:38.757296] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.908 [2024-05-14 03:10:38.757311] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:52.908 [2024-05-14 03:10:38.757336] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:23:52.908 [2024-05-14 03:10:38.757347] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.908 [2024-05-14 03:10:38.758633] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 88.201 ms, result 0 00:24:37.909  Copying: 23/1024 [MB] (23 MBps) Copying: 46/1024 [MB] (22 MBps) Copying: 69/1024 [MB] (23 MBps) Copying: 92/1024 [MB] (23 MBps) Copying: 115/1024 [MB] (22 MBps) Copying: 138/1024 [MB] (23 MBps) Copying: 
161/1024 [MB] (22 MBps) Copying: 184/1024 [MB] (22 MBps) Copying: 206/1024 [MB] (22 MBps) Copying: 229/1024 [MB] (22 MBps) Copying: 252/1024 [MB] (22 MBps) Copying: 275/1024 [MB] (23 MBps) Copying: 298/1024 [MB] (22 MBps) Copying: 320/1024 [MB] (22 MBps) Copying: 343/1024 [MB] (22 MBps) Copying: 366/1024 [MB] (22 MBps) Copying: 389/1024 [MB] (23 MBps) Copying: 411/1024 [MB] (22 MBps) Copying: 434/1024 [MB] (22 MBps) Copying: 457/1024 [MB] (22 MBps) Copying: 480/1024 [MB] (22 MBps) Copying: 502/1024 [MB] (22 MBps) Copying: 525/1024 [MB] (22 MBps) Copying: 548/1024 [MB] (22 MBps) Copying: 571/1024 [MB] (22 MBps) Copying: 594/1024 [MB] (23 MBps) Copying: 618/1024 [MB] (23 MBps) Copying: 641/1024 [MB] (23 MBps) Copying: 664/1024 [MB] (22 MBps) Copying: 687/1024 [MB] (23 MBps) Copying: 710/1024 [MB] (22 MBps) Copying: 733/1024 [MB] (23 MBps) Copying: 756/1024 [MB] (23 MBps) Copying: 779/1024 [MB] (23 MBps) Copying: 802/1024 [MB] (22 MBps) Copying: 825/1024 [MB] (22 MBps) Copying: 848/1024 [MB] (22 MBps) Copying: 870/1024 [MB] (22 MBps) Copying: 892/1024 [MB] (21 MBps) Copying: 915/1024 [MB] (22 MBps) Copying: 937/1024 [MB] (22 MBps) Copying: 960/1024 [MB] (22 MBps) Copying: 983/1024 [MB] (23 MBps) Copying: 1007/1024 [MB] (23 MBps) Copying: 1024/1024 [MB] (average 22 MBps)[2024-05-14 03:11:23.754440] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.909 [2024-05-14 03:11:23.754570] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:37.909 [2024-05-14 03:11:23.754598] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:37.909 [2024-05-14 03:11:23.754613] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.909 [2024-05-14 03:11:23.754651] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:37.909 [2024-05-14 03:11:23.755165] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.909 [2024-05-14 03:11:23.755187] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:37.909 [2024-05-14 03:11:23.755280] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.479 ms 00:24:37.909 [2024-05-14 03:11:23.755298] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.909 [2024-05-14 03:11:23.755657] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.909 [2024-05-14 03:11:23.755692] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:37.909 [2024-05-14 03:11:23.755714] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.322 ms 00:24:37.909 [2024-05-14 03:11:23.755729] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.909 [2024-05-14 03:11:23.759287] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.909 [2024-05-14 03:11:23.759318] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:37.909 [2024-05-14 03:11:23.759342] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.537 ms 00:24:37.909 [2024-05-14 03:11:23.759352] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.909 [2024-05-14 03:11:23.765037] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.909 [2024-05-14 03:11:23.765066] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:24:37.909 [2024-05-14 03:11:23.765088] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.661 
ms 00:24:37.909 [2024-05-14 03:11:23.765104] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.909 [2024-05-14 03:11:23.766552] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.909 [2024-05-14 03:11:23.766604] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:37.909 [2024-05-14 03:11:23.766634] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.212 ms 00:24:37.909 [2024-05-14 03:11:23.766643] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.909 [2024-05-14 03:11:23.769773] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.909 [2024-05-14 03:11:23.769808] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:37.909 [2024-05-14 03:11:23.769821] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.081 ms 00:24:37.909 [2024-05-14 03:11:23.769838] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.909 [2024-05-14 03:11:23.773559] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.909 [2024-05-14 03:11:23.773597] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:37.909 [2024-05-14 03:11:23.773611] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.699 ms 00:24:37.909 [2024-05-14 03:11:23.773621] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.909 [2024-05-14 03:11:23.775377] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.909 [2024-05-14 03:11:23.775424] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:24:37.909 [2024-05-14 03:11:23.775452] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.723 ms 00:24:37.909 [2024-05-14 03:11:23.775461] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.909 [2024-05-14 03:11:23.776959] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.909 [2024-05-14 03:11:23.777008] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:24:37.909 [2024-05-14 03:11:23.777037] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.467 ms 00:24:37.909 [2024-05-14 03:11:23.777046] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.909 [2024-05-14 03:11:23.778183] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.909 [2024-05-14 03:11:23.778258] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:37.909 [2024-05-14 03:11:23.778285] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.106 ms 00:24:37.909 [2024-05-14 03:11:23.778295] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.909 [2024-05-14 03:11:23.779262] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.910 [2024-05-14 03:11:23.779295] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:37.910 [2024-05-14 03:11:23.779322] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.911 ms 00:24:37.910 [2024-05-14 03:11:23.779346] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.910 [2024-05-14 03:11:23.779376] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:37.910 [2024-05-14 03:11:23.779394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 
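Two consistency checks can be read straight off the dumps around this point. In the statistics dump printed at the earlier shutdown, the reported WAF of 1.0147 corresponds to total writes divided by user writes: 136640 / 134656 = 1.0147. And the band validity list starting here (Band 1: 261120 / 261120 closed, Band 2: 3584 / 261120 open, all remaining bands free) sums to 261120 + 3584 = 264704 valid blocks, matching the "total valid LBAs: 264704" reported earlier, so the clean shutdown appears to account for exactly the data written before the dirty shutdown.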
00:24:37.910 [2024-05-14 03:11:23.779405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3584 / 261120 wr_cnt: 1 state: open 00:24:37.910 [2024-05-14 03:11:23.779414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.779423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.779432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.779441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.779450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.779459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.779468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.779476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.779485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.779494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.779502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.779511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.779520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.779529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.779538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.779546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.779555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.779564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.779573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.779581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.779590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.779598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.779607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.779616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 
wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.779626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.779635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.779644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.779653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.779662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.779671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.779680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.779689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.779698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.779707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.779716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.779725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.779733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.779742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.779751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.779760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.779768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.779777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.779785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.779794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.779803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.779812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.779821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.779829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.779838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.779847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.779855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.779864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.779873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.779882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.779891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.779900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.779908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.779917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.779926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.779935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.779944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.779953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.779962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.779970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.779981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.779990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.779999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.780008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.780017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.780025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.780034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.780043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.780052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.780060] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.780069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.780078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.780086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.780095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.780104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.780113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.780122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.780145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.780175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.780209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.780236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.780246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.780256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:37.910 [2024-05-14 03:11:23.780266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:37.911 [2024-05-14 03:11:23.780291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:37.911 [2024-05-14 03:11:23.780301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:37.911 [2024-05-14 03:11:23.780311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:37.911 [2024-05-14 03:11:23.780321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:37.911 [2024-05-14 03:11:23.780332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:37.911 [2024-05-14 03:11:23.780342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:37.911 [2024-05-14 03:11:23.780352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:37.911 [2024-05-14 03:11:23.780362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:37.911 [2024-05-14 03:11:23.780381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:37.911 [2024-05-14 03:11:23.780391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:37.911 [2024-05-14 03:11:23.780410] ftl_debug.c: 
211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:37.911 [2024-05-14 03:11:23.780419] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 21b9fe78-b278-40de-b610-6cc3012e276f 00:24:37.911 [2024-05-14 03:11:23.780430] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264704 00:24:37.911 [2024-05-14 03:11:23.780439] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:37.911 [2024-05-14 03:11:23.780455] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:37.911 [2024-05-14 03:11:23.780464] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:37.911 [2024-05-14 03:11:23.780486] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:37.911 [2024-05-14 03:11:23.780496] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:37.911 [2024-05-14 03:11:23.780520] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:37.911 [2024-05-14 03:11:23.780529] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:37.911 [2024-05-14 03:11:23.780537] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:37.911 [2024-05-14 03:11:23.780546] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.911 [2024-05-14 03:11:23.780556] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:37.911 [2024-05-14 03:11:23.780566] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.172 ms 00:24:37.911 [2024-05-14 03:11:23.780591] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.911 [2024-05-14 03:11:23.781786] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.911 [2024-05-14 03:11:23.781809] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:37.911 [2024-05-14 03:11:23.781821] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.147 ms 00:24:37.911 [2024-05-14 03:11:23.781830] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.911 [2024-05-14 03:11:23.781879] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.911 [2024-05-14 03:11:23.781890] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:37.911 [2024-05-14 03:11:23.781910] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:24:37.911 [2024-05-14 03:11:23.781919] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.911 [2024-05-14 03:11:23.786472] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:37.911 [2024-05-14 03:11:23.786621] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:37.911 [2024-05-14 03:11:23.786741] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:37.911 [2024-05-14 03:11:23.786788] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.911 [2024-05-14 03:11:23.786923] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:37.911 [2024-05-14 03:11:23.786975] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:37.911 [2024-05-14 03:11:23.787007] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:37.911 [2024-05-14 03:11:23.787039] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.911 [2024-05-14 03:11:23.787217] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:24:37.911 [2024-05-14 03:11:23.787324] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:37.911 [2024-05-14 03:11:23.787413] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:37.911 [2024-05-14 03:11:23.787501] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.911 [2024-05-14 03:11:23.787571] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:37.911 [2024-05-14 03:11:23.787622] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:37.911 [2024-05-14 03:11:23.787718] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:37.911 [2024-05-14 03:11:23.787814] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.911 [2024-05-14 03:11:23.794908] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:37.911 [2024-05-14 03:11:23.795092] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:37.911 [2024-05-14 03:11:23.795246] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:37.911 [2024-05-14 03:11:23.795354] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.911 [2024-05-14 03:11:23.798647] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:37.911 [2024-05-14 03:11:23.798782] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:37.911 [2024-05-14 03:11:23.798805] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:37.911 [2024-05-14 03:11:23.798816] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.911 [2024-05-14 03:11:23.798850] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:37.911 [2024-05-14 03:11:23.798861] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:37.911 [2024-05-14 03:11:23.798871] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:37.911 [2024-05-14 03:11:23.798880] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.911 [2024-05-14 03:11:23.798934] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:37.911 [2024-05-14 03:11:23.798947] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:37.911 [2024-05-14 03:11:23.798957] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:37.911 [2024-05-14 03:11:23.798966] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.911 [2024-05-14 03:11:23.799042] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:37.911 [2024-05-14 03:11:23.799063] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:37.911 [2024-05-14 03:11:23.799074] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:37.911 [2024-05-14 03:11:23.799083] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.911 [2024-05-14 03:11:23.799122] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:37.911 [2024-05-14 03:11:23.799173] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:37.911 [2024-05-14 03:11:23.799186] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:37.911 [2024-05-14 03:11:23.799196] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:24:37.911 [2024-05-14 03:11:23.799237] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:37.911 [2024-05-14 03:11:23.799250] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:37.911 [2024-05-14 03:11:23.799274] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:37.911 [2024-05-14 03:11:23.799284] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.911 [2024-05-14 03:11:23.799329] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:37.911 [2024-05-14 03:11:23.799343] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:37.911 [2024-05-14 03:11:23.799353] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:37.911 [2024-05-14 03:11:23.799363] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.911 [2024-05-14 03:11:23.799481] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 45.022 ms, result 0 00:24:38.171 00:24:38.171 00:24:38.171 03:11:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:24:40.077 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:24:40.077 03:11:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:24:40.077 03:11:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:24:40.077 03:11:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:40.077 03:11:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:40.077 03:11:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:24:40.077 03:11:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:40.077 03:11:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:24:40.077 03:11:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 92838 00:24:40.077 03:11:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@946 -- # '[' -z 92838 ']' 00:24:40.077 03:11:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@950 -- # kill -0 92838 00:24:40.077 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (92838) - No such process 00:24:40.077 Process with pid 92838 is not found 00:24:40.077 03:11:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@973 -- # echo 'Process with pid 92838 is not found' 00:24:40.077 03:11:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:24:40.336 Remove shared memory files 00:24:40.336 03:11:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:24:40.336 03:11:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:24:40.336 03:11:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:24:40.336 03:11:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:24:40.336 03:11:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:24:40.336 03:11:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:24:40.336 03:11:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:24:40.336 
************************************ 00:24:40.336 END TEST ftl_dirty_shutdown 00:24:40.336 ************************************ 00:24:40.336 00:24:40.336 real 3m49.516s 00:24:40.336 user 4m25.610s 00:24:40.336 sys 0m35.190s 00:24:40.336 03:11:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:40.336 03:11:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:40.595 03:11:26 ftl -- ftl/ftl.sh@79 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:24:40.595 03:11:26 ftl -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:24:40.595 03:11:26 ftl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:40.595 03:11:26 ftl -- common/autotest_common.sh@10 -- # set +x 00:24:40.595 ************************************ 00:24:40.595 START TEST ftl_upgrade_shutdown 00:24:40.595 ************************************ 00:24:40.595 03:11:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:24:40.595 * Looking for test storage... 00:24:40.595 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:24:40.595 03:11:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:24:40.595 03:11:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:24:40.595 03:11:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:24:40.595 03:11:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:24:40.595 03:11:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:24:40.595 03:11:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:40.595 03:11:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:40.595 03:11:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:24:40.596 03:11:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:24:40.596 03:11:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:40.596 03:11:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:40.596 03:11:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:24:40.596 03:11:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:24:40.596 03:11:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:40.596 03:11:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:40.596 03:11:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:24:40.596 03:11:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:24:40.596 03:11:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:40.596 03:11:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:40.596 03:11:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:24:40.596 03:11:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:24:40.596 03:11:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:40.596 03:11:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:40.596 03:11:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:40.596 03:11:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:40.596 03:11:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:24:40.596 03:11:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:24:40.596 03:11:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:40.596 03:11:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:40.596 03:11:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:40.596 03:11:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:24:40.596 03:11:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:24:40.596 03:11:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:24:40.596 03:11:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:24:40.596 03:11:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:24:40.596 03:11:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:24:40.596 
03:11:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:24:40.596 03:11:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:24:40.596 03:11:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:24:40.596 03:11:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:24:40.596 03:11:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:24:40.596 03:11:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:24:40.596 03:11:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:24:40.596 03:11:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:24:40.596 03:11:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:24:40.596 03:11:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:24:40.596 03:11:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=95259 00:24:40.596 03:11:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:24:40.596 03:11:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:24:40.596 03:11:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 95259 00:24:40.596 03:11:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@827 -- # '[' -z 95259 ']' 00:24:40.596 03:11:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:40.596 03:11:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:40.596 03:11:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:40.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:40.596 03:11:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:40.596 03:11:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:40.596 [2024-05-14 03:11:26.620786] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:24:40.596 [2024-05-14 03:11:26.620978] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95259 ] 00:24:40.855 [2024-05-14 03:11:26.770502] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:24:40.855 [2024-05-14 03:11:26.794885] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:40.855 [2024-05-14 03:11:26.838556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:41.800 03:11:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:41.800 03:11:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # return 0 00:24:41.800 03:11:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:24:41.800 03:11:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:24:41.800 03:11:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:24:41.800 03:11:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:24:41.800 03:11:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:24:41.800 03:11:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:24:41.800 03:11:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:24:41.800 03:11:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:24:41.800 03:11:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:24:41.800 03:11:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:24:41.800 03:11:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:24:41.800 03:11:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:24:41.800 03:11:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:24:41.800 03:11:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:24:41.800 03:11:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:24:41.800 03:11:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:24:41.800 03:11:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:24:41.800 03:11:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:24:41.800 03:11:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:24:41.800 03:11:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:24:41.800 03:11:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:24:42.058 03:11:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:24:42.058 03:11:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:24:42.058 03:11:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:24:42.058 03:11:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1374 -- # local bdev_name=basen1 00:24:42.058 03:11:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1375 -- # local bdev_info 00:24:42.058 03:11:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1376 -- # local bs 00:24:42.058 03:11:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1377 -- # local nb 00:24:42.058 03:11:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:24:42.317 03:11:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:24:42.317 { 00:24:42.317 "name": 
"basen1", 00:24:42.317 "aliases": [ 00:24:42.317 "df956f6a-09f8-4e66-b465-eac6f64153b1" 00:24:42.317 ], 00:24:42.317 "product_name": "NVMe disk", 00:24:42.317 "block_size": 4096, 00:24:42.317 "num_blocks": 1310720, 00:24:42.317 "uuid": "df956f6a-09f8-4e66-b465-eac6f64153b1", 00:24:42.317 "assigned_rate_limits": { 00:24:42.317 "rw_ios_per_sec": 0, 00:24:42.317 "rw_mbytes_per_sec": 0, 00:24:42.317 "r_mbytes_per_sec": 0, 00:24:42.317 "w_mbytes_per_sec": 0 00:24:42.317 }, 00:24:42.317 "claimed": true, 00:24:42.317 "claim_type": "read_many_write_one", 00:24:42.317 "zoned": false, 00:24:42.317 "supported_io_types": { 00:24:42.317 "read": true, 00:24:42.317 "write": true, 00:24:42.317 "unmap": true, 00:24:42.317 "write_zeroes": true, 00:24:42.317 "flush": true, 00:24:42.317 "reset": true, 00:24:42.317 "compare": true, 00:24:42.317 "compare_and_write": false, 00:24:42.317 "abort": true, 00:24:42.317 "nvme_admin": true, 00:24:42.317 "nvme_io": true 00:24:42.317 }, 00:24:42.317 "driver_specific": { 00:24:42.317 "nvme": [ 00:24:42.317 { 00:24:42.317 "pci_address": "0000:00:11.0", 00:24:42.317 "trid": { 00:24:42.317 "trtype": "PCIe", 00:24:42.317 "traddr": "0000:00:11.0" 00:24:42.317 }, 00:24:42.317 "ctrlr_data": { 00:24:42.317 "cntlid": 0, 00:24:42.317 "vendor_id": "0x1b36", 00:24:42.317 "model_number": "QEMU NVMe Ctrl", 00:24:42.317 "serial_number": "12341", 00:24:42.317 "firmware_revision": "8.0.0", 00:24:42.317 "subnqn": "nqn.2019-08.org.qemu:12341", 00:24:42.317 "oacs": { 00:24:42.317 "security": 0, 00:24:42.317 "format": 1, 00:24:42.317 "firmware": 0, 00:24:42.317 "ns_manage": 1 00:24:42.317 }, 00:24:42.317 "multi_ctrlr": false, 00:24:42.317 "ana_reporting": false 00:24:42.317 }, 00:24:42.317 "vs": { 00:24:42.317 "nvme_version": "1.4" 00:24:42.317 }, 00:24:42.317 "ns_data": { 00:24:42.317 "id": 1, 00:24:42.317 "can_share": false 00:24:42.317 } 00:24:42.317 } 00:24:42.317 ], 00:24:42.317 "mp_policy": "active_passive" 00:24:42.317 } 00:24:42.317 } 00:24:42.317 ]' 00:24:42.317 03:11:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:24:42.317 03:11:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # bs=4096 00:24:42.317 03:11:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:24:42.317 03:11:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # nb=1310720 00:24:42.317 03:11:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bdev_size=5120 00:24:42.317 03:11:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # echo 5120 00:24:42.317 03:11:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:24:42.317 03:11:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:24:42.317 03:11:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:24:42.317 03:11:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:42.317 03:11:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:24:42.575 03:11:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=125bc24c-6605-4a42-8f54-2f1ac6cea61e 00:24:42.575 03:11:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:24:42.575 03:11:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 125bc24c-6605-4a42-8f54-2f1ac6cea61e 00:24:42.833 03:11:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:24:43.091 03:11:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=12adeba0-ef7d-43e2-83cc-1c6e94754060 00:24:43.091 03:11:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 12adeba0-ef7d-43e2-83cc-1c6e94754060 00:24:43.350 03:11:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=6c4b5137-8ef0-46ba-9d24-72d5b040a33a 00:24:43.350 03:11:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 6c4b5137-8ef0-46ba-9d24-72d5b040a33a ]] 00:24:43.350 03:11:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 6c4b5137-8ef0-46ba-9d24-72d5b040a33a 5120 00:24:43.350 03:11:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:24:43.350 03:11:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:24:43.350 03:11:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=6c4b5137-8ef0-46ba-9d24-72d5b040a33a 00:24:43.350 03:11:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:24:43.350 03:11:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 6c4b5137-8ef0-46ba-9d24-72d5b040a33a 00:24:43.350 03:11:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1374 -- # local bdev_name=6c4b5137-8ef0-46ba-9d24-72d5b040a33a 00:24:43.350 03:11:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1375 -- # local bdev_info 00:24:43.350 03:11:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1376 -- # local bs 00:24:43.350 03:11:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1377 -- # local nb 00:24:43.350 03:11:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6c4b5137-8ef0-46ba-9d24-72d5b040a33a 00:24:43.608 03:11:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:24:43.608 { 00:24:43.608 "name": "6c4b5137-8ef0-46ba-9d24-72d5b040a33a", 00:24:43.608 "aliases": [ 00:24:43.608 "lvs/basen1p0" 00:24:43.608 ], 00:24:43.608 "product_name": "Logical Volume", 00:24:43.608 "block_size": 4096, 00:24:43.608 "num_blocks": 5242880, 00:24:43.608 "uuid": "6c4b5137-8ef0-46ba-9d24-72d5b040a33a", 00:24:43.608 "assigned_rate_limits": { 00:24:43.608 "rw_ios_per_sec": 0, 00:24:43.608 "rw_mbytes_per_sec": 0, 00:24:43.608 "r_mbytes_per_sec": 0, 00:24:43.608 "w_mbytes_per_sec": 0 00:24:43.608 }, 00:24:43.608 "claimed": false, 00:24:43.608 "zoned": false, 00:24:43.608 "supported_io_types": { 00:24:43.608 "read": true, 00:24:43.608 "write": true, 00:24:43.608 "unmap": true, 00:24:43.608 "write_zeroes": true, 00:24:43.608 "flush": false, 00:24:43.608 "reset": true, 00:24:43.608 "compare": false, 00:24:43.608 "compare_and_write": false, 00:24:43.608 "abort": false, 00:24:43.608 "nvme_admin": false, 00:24:43.608 "nvme_io": false 00:24:43.608 }, 00:24:43.608 "driver_specific": { 00:24:43.608 "lvol": { 00:24:43.608 "lvol_store_uuid": "12adeba0-ef7d-43e2-83cc-1c6e94754060", 00:24:43.608 "base_bdev": "basen1", 00:24:43.608 "thin_provision": true, 00:24:43.608 "num_allocated_clusters": 0, 00:24:43.608 "snapshot": false, 00:24:43.608 "clone": false, 00:24:43.608 "esnap_clone": false 00:24:43.608 } 00:24:43.608 } 00:24:43.608 } 00:24:43.608 ]' 00:24:43.608 03:11:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:24:43.608 03:11:29 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@1379 -- # bs=4096 00:24:43.608 03:11:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:24:43.608 03:11:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # nb=5242880 00:24:43.608 03:11:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bdev_size=20480 00:24:43.608 03:11:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # echo 20480 00:24:43.608 03:11:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:24:43.608 03:11:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:24:43.608 03:11:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:24:43.866 03:11:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:24:43.866 03:11:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:24:43.866 03:11:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:24:44.125 03:11:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:24:44.125 03:11:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:24:44.125 03:11:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 6c4b5137-8ef0-46ba-9d24-72d5b040a33a -c cachen1p0 --l2p_dram_limit 2 00:24:44.385 [2024-05-14 03:11:30.162082] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:44.385 [2024-05-14 03:11:30.162164] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:24:44.385 [2024-05-14 03:11:30.162193] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:24:44.385 [2024-05-14 03:11:30.162207] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:44.385 [2024-05-14 03:11:30.162276] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:44.385 [2024-05-14 03:11:30.162294] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:24:44.385 [2024-05-14 03:11:30.162306] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:24:44.385 [2024-05-14 03:11:30.162320] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:44.385 [2024-05-14 03:11:30.162355] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:24:44.385 [2024-05-14 03:11:30.162759] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:24:44.385 [2024-05-14 03:11:30.162799] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:44.385 [2024-05-14 03:11:30.162815] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:24:44.385 [2024-05-14 03:11:30.162830] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.457 ms 00:24:44.385 [2024-05-14 03:11:30.162843] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:44.385 [2024-05-14 03:11:30.163053] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 75efb898-7f58-4054-9e83-434893bc1140 00:24:44.385 [2024-05-14 03:11:30.164036] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:44.385 [2024-05-14 03:11:30.164082] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: 
Default-initialize superblock 00:24:44.385 [2024-05-14 03:11:30.164100] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:24:44.385 [2024-05-14 03:11:30.164111] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:44.385 [2024-05-14 03:11:30.168241] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:44.385 [2024-05-14 03:11:30.168292] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:24:44.385 [2024-05-14 03:11:30.168312] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.059 ms 00:24:44.385 [2024-05-14 03:11:30.168324] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:44.385 [2024-05-14 03:11:30.168392] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:44.385 [2024-05-14 03:11:30.168410] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:24:44.385 [2024-05-14 03:11:30.168424] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:24:44.385 [2024-05-14 03:11:30.168437] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:44.385 [2024-05-14 03:11:30.168567] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:44.385 [2024-05-14 03:11:30.168584] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:24:44.385 [2024-05-14 03:11:30.168607] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:24:44.385 [2024-05-14 03:11:30.168619] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:44.385 [2024-05-14 03:11:30.168659] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:24:44.385 [2024-05-14 03:11:30.170001] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:44.385 [2024-05-14 03:11:30.170068] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:24:44.385 [2024-05-14 03:11:30.170086] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.353 ms 00:24:44.385 [2024-05-14 03:11:30.170099] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:44.385 [2024-05-14 03:11:30.170130] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:44.385 [2024-05-14 03:11:30.170175] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:24:44.385 [2024-05-14 03:11:30.170191] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:24:44.385 [2024-05-14 03:11:30.170206] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:44.385 [2024-05-14 03:11:30.170241] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:24:44.385 [2024-05-14 03:11:30.170379] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:24:44.385 [2024-05-14 03:11:30.170403] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:24:44.385 [2024-05-14 03:11:30.170419] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:24:44.386 [2024-05-14 03:11:30.170433] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:24:44.386 [2024-05-14 03:11:30.170448] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:24:44.386 [2024-05-14 03:11:30.170467] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:24:44.386 [2024-05-14 03:11:30.170483] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:24:44.386 [2024-05-14 03:11:30.170493] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:24:44.386 [2024-05-14 03:11:30.170504] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:24:44.386 [2024-05-14 03:11:30.170533] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:44.386 [2024-05-14 03:11:30.170544] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:24:44.386 [2024-05-14 03:11:30.170556] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.293 ms 00:24:44.386 [2024-05-14 03:11:30.170568] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:44.386 [2024-05-14 03:11:30.170642] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:44.386 [2024-05-14 03:11:30.170668] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:24:44.386 [2024-05-14 03:11:30.170679] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:24:44.386 [2024-05-14 03:11:30.170702] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:44.386 [2024-05-14 03:11:30.170774] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:24:44.386 [2024-05-14 03:11:30.170794] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:24:44.386 [2024-05-14 03:11:30.170805] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:24:44.386 [2024-05-14 03:11:30.170818] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:44.386 [2024-05-14 03:11:30.170829] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:24:44.386 [2024-05-14 03:11:30.170840] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:24:44.386 [2024-05-14 03:11:30.170850] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:24:44.386 [2024-05-14 03:11:30.170864] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:24:44.386 [2024-05-14 03:11:30.170874] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:24:44.386 [2024-05-14 03:11:30.170885] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:44.386 [2024-05-14 03:11:30.170895] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:24:44.386 [2024-05-14 03:11:30.170906] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:24:44.386 [2024-05-14 03:11:30.170915] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:44.386 [2024-05-14 03:11:30.170929] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:24:44.386 [2024-05-14 03:11:30.170939] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:24:44.386 [2024-05-14 03:11:30.170950] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:44.386 [2024-05-14 03:11:30.170960] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:24:44.386 [2024-05-14 03:11:30.170972] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:24:44.386 [2024-05-14 03:11:30.170982] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:44.386 [2024-05-14 03:11:30.170993] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:24:44.386 
[2024-05-14 03:11:30.171002] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:24:44.386 [2024-05-14 03:11:30.171014] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:24:44.386 [2024-05-14 03:11:30.171024] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:24:44.386 [2024-05-14 03:11:30.171035] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:24:44.386 [2024-05-14 03:11:30.171045] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:24:44.386 [2024-05-14 03:11:30.171056] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:24:44.386 [2024-05-14 03:11:30.171065] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:24:44.386 [2024-05-14 03:11:30.171077] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:24:44.386 [2024-05-14 03:11:30.171086] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:24:44.386 [2024-05-14 03:11:30.171099] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:24:44.386 [2024-05-14 03:11:30.171109] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:24:44.386 [2024-05-14 03:11:30.171120] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:24:44.386 [2024-05-14 03:11:30.171129] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:24:44.386 [2024-05-14 03:11:30.171142] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:24:44.386 [2024-05-14 03:11:30.171167] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:24:44.386 [2024-05-14 03:11:30.171180] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:24:44.386 [2024-05-14 03:11:30.171191] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:44.386 [2024-05-14 03:11:30.171202] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:24:44.386 [2024-05-14 03:11:30.171212] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:24:44.386 [2024-05-14 03:11:30.171245] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:44.386 [2024-05-14 03:11:30.171255] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:24:44.386 [2024-05-14 03:11:30.171268] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:24:44.386 [2024-05-14 03:11:30.171278] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:24:44.386 [2024-05-14 03:11:30.171290] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:44.386 [2024-05-14 03:11:30.171301] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:24:44.386 [2024-05-14 03:11:30.171314] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:24:44.386 [2024-05-14 03:11:30.171324] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:24:44.386 [2024-05-14 03:11:30.171335] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:24:44.386 [2024-05-14 03:11:30.171345] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:24:44.386 [2024-05-14 03:11:30.171358] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:24:44.386 [2024-05-14 03:11:30.171370] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:24:44.386 [2024-05-14 03:11:30.171385] upgrade/ftl_sb_v5.c: 
415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:44.386 [2024-05-14 03:11:30.171397] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:24:44.386 [2024-05-14 03:11:30.171409] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:24:44.386 [2024-05-14 03:11:30.171419] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:24:44.386 [2024-05-14 03:11:30.171431] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:24:44.386 [2024-05-14 03:11:30.171441] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:24:44.386 [2024-05-14 03:11:30.171453] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:24:44.386 [2024-05-14 03:11:30.171464] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:24:44.386 [2024-05-14 03:11:30.171475] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:24:44.386 [2024-05-14 03:11:30.171485] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:24:44.386 [2024-05-14 03:11:30.171500] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:24:44.386 [2024-05-14 03:11:30.171511] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:24:44.386 [2024-05-14 03:11:30.171523] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:24:44.386 [2024-05-14 03:11:30.171534] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:24:44.386 [2024-05-14 03:11:30.171545] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:24:44.386 [2024-05-14 03:11:30.171557] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:44.386 [2024-05-14 03:11:30.171572] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:44.386 [2024-05-14 03:11:30.171583] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:24:44.386 [2024-05-14 03:11:30.171594] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:24:44.386 [2024-05-14 03:11:30.171605] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:24:44.386 [2024-05-14 03:11:30.171618] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:44.386 [2024-05-14 03:11:30.171638] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:24:44.386 [2024-05-14 03:11:30.171658] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.880 ms 00:24:44.386 [2024-05-14 03:11:30.171670] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:44.386 [2024-05-14 03:11:30.177129] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:44.386 [2024-05-14 03:11:30.177194] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:24:44.386 [2024-05-14 03:11:30.177215] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 5.397 ms 00:24:44.386 [2024-05-14 03:11:30.177226] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:44.386 [2024-05-14 03:11:30.177270] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:44.386 [2024-05-14 03:11:30.177286] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:24:44.386 [2024-05-14 03:11:30.177300] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:24:44.386 [2024-05-14 03:11:30.177310] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:44.386 [2024-05-14 03:11:30.185437] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:44.386 [2024-05-14 03:11:30.185497] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:24:44.386 [2024-05-14 03:11:30.185532] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 8.079 ms 00:24:44.386 [2024-05-14 03:11:30.185544] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:44.386 [2024-05-14 03:11:30.185601] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:44.386 [2024-05-14 03:11:30.185614] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:24:44.386 [2024-05-14 03:11:30.185637] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:24:44.387 [2024-05-14 03:11:30.185647] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:44.387 [2024-05-14 03:11:30.186022] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:44.387 [2024-05-14 03:11:30.186054] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:24:44.387 [2024-05-14 03:11:30.186074] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.291 ms 00:24:44.387 [2024-05-14 03:11:30.186085] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:44.387 [2024-05-14 03:11:30.186154] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:44.387 [2024-05-14 03:11:30.186174] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:24:44.387 [2024-05-14 03:11:30.186198] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:24:44.387 [2024-05-14 03:11:30.186209] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:44.387 [2024-05-14 03:11:30.191312] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:44.387 [2024-05-14 03:11:30.191364] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:24:44.387 [2024-05-14 03:11:30.191420] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 5.069 ms 00:24:44.387 [2024-05-14 03:11:30.191433] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:44.387 [2024-05-14 03:11:30.198770] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 
00:24:44.387 [2024-05-14 03:11:30.199660] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:44.387 [2024-05-14 03:11:30.199739] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:24:44.387 [2024-05-14 03:11:30.199769] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 8.160 ms 00:24:44.387 [2024-05-14 03:11:30.199782] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:44.387 [2024-05-14 03:11:30.212682] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:44.387 [2024-05-14 03:11:30.212743] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:24:44.387 [2024-05-14 03:11:30.212763] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.872 ms 00:24:44.387 [2024-05-14 03:11:30.212777] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:44.387 [2024-05-14 03:11:30.212819] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] First startup needs to scrub nv cache data region, this may take some time. 00:24:44.387 [2024-05-14 03:11:30.212839] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 4GiB 00:24:46.920 [2024-05-14 03:11:32.632045] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:46.920 [2024-05-14 03:11:32.632140] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:24:46.920 [2024-05-14 03:11:32.632172] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2419.242 ms 00:24:46.920 [2024-05-14 03:11:32.632186] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:46.920 [2024-05-14 03:11:32.632329] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:46.920 [2024-05-14 03:11:32.632356] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:24:46.920 [2024-05-14 03:11:32.632371] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.063 ms 00:24:46.920 [2024-05-14 03:11:32.632385] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:46.921 [2024-05-14 03:11:32.635331] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:46.921 [2024-05-14 03:11:32.635396] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:24:46.921 [2024-05-14 03:11:32.635412] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.918 ms 00:24:46.921 [2024-05-14 03:11:32.635427] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:46.921 [2024-05-14 03:11:32.638488] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:46.921 [2024-05-14 03:11:32.638543] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:24:46.921 [2024-05-14 03:11:32.638574] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.019 ms 00:24:46.921 [2024-05-14 03:11:32.638586] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:46.921 [2024-05-14 03:11:32.638811] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:46.921 [2024-05-14 03:11:32.638847] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:24:46.921 [2024-05-14 03:11:32.638862] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.185 ms 00:24:46.921 [2024-05-14 03:11:32.638876] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:46.921 [2024-05-14 03:11:32.660141] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl] Action 00:24:46.921 [2024-05-14 03:11:32.660233] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:24:46.921 [2024-05-14 03:11:32.660260] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 21.219 ms 00:24:46.921 [2024-05-14 03:11:32.660283] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:46.921 [2024-05-14 03:11:32.663914] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:46.921 [2024-05-14 03:11:32.663964] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:24:46.921 [2024-05-14 03:11:32.663997] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.583 ms 00:24:46.921 [2024-05-14 03:11:32.664021] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:46.921 [2024-05-14 03:11:32.665820] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:46.921 [2024-05-14 03:11:32.665857] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:24:46.921 [2024-05-14 03:11:32.665887] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.757 ms 00:24:46.921 [2024-05-14 03:11:32.665898] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:46.921 [2024-05-14 03:11:32.669570] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:46.921 [2024-05-14 03:11:32.669626] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:24:46.921 [2024-05-14 03:11:32.669642] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.646 ms 00:24:46.921 [2024-05-14 03:11:32.669654] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:46.921 [2024-05-14 03:11:32.669700] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:46.921 [2024-05-14 03:11:32.669719] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:24:46.921 [2024-05-14 03:11:32.669731] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:24:46.921 [2024-05-14 03:11:32.669742] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:46.921 [2024-05-14 03:11:32.669810] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:46.921 [2024-05-14 03:11:32.669834] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:24:46.921 [2024-05-14 03:11:32.669884] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:24:46.921 [2024-05-14 03:11:32.669917] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:46.921 [2024-05-14 03:11:32.671106] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2508.490 ms, result 0 00:24:46.921 { 00:24:46.921 "name": "ftl", 00:24:46.921 "uuid": "75efb898-7f58-4054-9e83-434893bc1140" 00:24:46.921 } 00:24:46.921 03:11:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:24:46.921 [2024-05-14 03:11:32.919770] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:46.921 03:11:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:24:47.180 03:11:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:24:47.439 [2024-05-14 03:11:33.344182] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:24:47.439 03:11:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:24:47.717 [2024-05-14 03:11:33.604285] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:47.718 [2024-05-14 03:11:33.604696] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:24:47.718 03:11:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:24:47.983 03:11:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:24:47.983 03:11:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:24:47.983 03:11:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:24:47.983 03:11:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:24:47.983 03:11:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:24:47.983 03:11:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:24:47.983 03:11:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:24:47.983 03:11:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:24:47.983 03:11:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:24:47.983 03:11:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:24:47.983 Fill FTL, iteration 1 00:24:47.983 03:11:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:24:47.983 03:11:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:24:47.983 03:11:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:24:47.983 03:11:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:24:47.983 03:11:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:24:47.984 03:11:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:24:47.984 03:11:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=95370 00:24:47.984 03:11:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:24:47.984 03:11:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:24:47.984 03:11:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 95370 /var/tmp/spdk.tgt.sock 00:24:47.984 03:11:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@827 -- # '[' -z 95370 ']' 00:24:47.984 03:11:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:24:47.984 03:11:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:47.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 
00:24:47.984 03:11:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:24:47.984 03:11:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:47.984 03:11:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:48.243 [2024-05-14 03:11:34.011670] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:24:48.243 [2024-05-14 03:11:34.011880] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95370 ] 00:24:48.243 [2024-05-14 03:11:34.146526] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:48.243 [2024-05-14 03:11:34.170582] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:48.243 [2024-05-14 03:11:34.211755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:49.179 03:11:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:49.179 03:11:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # return 0 00:24:49.179 03:11:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:24:49.179 ftln1 00:24:49.179 03:11:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:24:49.179 03:11:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:24:49.438 03:11:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:24:49.438 03:11:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 95370 00:24:49.438 03:11:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@946 -- # '[' -z 95370 ']' 00:24:49.438 03:11:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # kill -0 95370 00:24:49.438 03:11:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@951 -- # uname 00:24:49.438 03:11:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:49.438 03:11:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 95370 00:24:49.438 03:11:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:49.438 03:11:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:49.438 killing process with pid 95370 00:24:49.438 03:11:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # echo 'killing process with pid 95370' 00:24:49.438 03:11:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@965 -- # kill 95370 00:24:49.438 03:11:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # wait 95370 00:24:49.696 03:11:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:24:49.696 03:11:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 
--qd=2 --seek=0 00:24:49.696 [2024-05-14 03:11:35.687724] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:24:49.696 [2024-05-14 03:11:35.687853] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95395 ] 00:24:49.954 [2024-05-14 03:11:35.821072] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:49.954 [2024-05-14 03:11:35.839228] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:49.954 [2024-05-14 03:11:35.873399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:55.089  Copying: 220/1024 [MB] (220 MBps) Copying: 434/1024 [MB] (214 MBps) Copying: 648/1024 [MB] (214 MBps) Copying: 864/1024 [MB] (216 MBps) Copying: 1024/1024 [MB] (average 215 MBps) 00:24:55.089 00:24:55.089 03:11:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:24:55.089 03:11:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:24:55.089 Calculate MD5 checksum, iteration 1 00:24:55.089 03:11:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:24:55.089 03:11:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:24:55.089 03:11:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:24:55.089 03:11:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:24:55.089 03:11:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:24:55.089 03:11:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:24:55.089 [2024-05-14 03:11:41.077410] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:24:55.089 [2024-05-14 03:11:41.077556] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95458 ] 00:24:55.347 [2024-05-14 03:11:41.210206] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:24:55.347 [2024-05-14 03:11:41.224142] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:55.347 [2024-05-14 03:11:41.256103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:57.917  Copying: 467/1024 [MB] (467 MBps) Copying: 929/1024 [MB] (462 MBps) Copying: 1024/1024 [MB] (average 464 MBps) 00:24:57.917 00:24:57.917 03:11:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:24:57.917 03:11:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:24:59.814 03:11:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:24:59.814 03:11:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=667b8a39bc53d18ed2884e1888b872ac 00:24:59.814 03:11:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:24:59.814 03:11:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:24:59.814 Fill FTL, iteration 2 00:24:59.814 03:11:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:24:59.814 03:11:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:24:59.814 03:11:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:24:59.814 03:11:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:24:59.814 03:11:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:24:59.814 03:11:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:24:59.814 03:11:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:25:00.072 [2024-05-14 03:11:45.849102] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:25:00.072 [2024-05-14 03:11:45.849300] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95504 ] 00:25:00.072 [2024-05-14 03:11:45.996465] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:25:00.072 [2024-05-14 03:11:46.019530] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:00.072 [2024-05-14 03:11:46.060409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:05.529  Copying: 212/1024 [MB] (212 MBps) Copying: 421/1024 [MB] (209 MBps) Copying: 633/1024 [MB] (212 MBps) Copying: 841/1024 [MB] (208 MBps) Copying: 1024/1024 [MB] (average 210 MBps) 00:25:05.529 00:25:05.529 Calculate MD5 checksum, iteration 2 00:25:05.529 03:11:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:25:05.529 03:11:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:25:05.529 03:11:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:25:05.529 03:11:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:05.529 03:11:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:05.529 03:11:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:05.529 03:11:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:25:05.529 03:11:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:25:05.529 [2024-05-14 03:11:51.383830] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:25:05.529 [2024-05-14 03:11:51.383987] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95569 ] 00:25:05.529 [2024-05-14 03:11:51.518473] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:25:05.529 [2024-05-14 03:11:51.538142] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:05.789 [2024-05-14 03:11:51.569789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:08.620  Copying: 469/1024 [MB] (469 MBps) Copying: 926/1024 [MB] (457 MBps) Copying: 1024/1024 [MB] (average 458 MBps) 00:25:08.620 00:25:08.620 03:11:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:25:08.620 03:11:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:10.523 03:11:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:25:10.523 03:11:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=c7f65a782eaf1b1c20abf490754ae710 00:25:10.523 03:11:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:25:10.523 03:11:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:25:10.523 03:11:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:25:10.782 [2024-05-14 03:11:56.707020] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:10.782 [2024-05-14 03:11:56.707079] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:25:10.782 [2024-05-14 03:11:56.707124] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:25:10.782 [2024-05-14 03:11:56.707188] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:10.782 [2024-05-14 03:11:56.707222] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:10.782 [2024-05-14 03:11:56.707236] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:25:10.782 [2024-05-14 03:11:56.707247] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:25:10.782 [2024-05-14 03:11:56.707257] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:10.782 [2024-05-14 03:11:56.707280] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:10.782 [2024-05-14 03:11:56.707292] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:25:10.782 [2024-05-14 03:11:56.707303] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:25:10.782 [2024-05-14 03:11:56.707312] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:10.782 [2024-05-14 03:11:56.707415] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.364 ms, result 0 00:25:10.782 true 00:25:10.782 03:11:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:25:11.042 { 00:25:11.042 "name": "ftl", 00:25:11.042 "properties": [ 00:25:11.042 { 00:25:11.042 "name": "superblock_version", 00:25:11.042 "value": 5, 00:25:11.042 "read-only": true 00:25:11.042 }, 00:25:11.042 { 00:25:11.042 "name": "base_device", 00:25:11.042 "bands": [ 00:25:11.042 { 00:25:11.042 "id": 0, 00:25:11.042 "state": "FREE", 00:25:11.042 "validity": 0.0 00:25:11.042 }, 00:25:11.042 { 00:25:11.042 "id": 1, 00:25:11.042 "state": "FREE", 00:25:11.042 "validity": 0.0 00:25:11.042 }, 00:25:11.042 { 00:25:11.042 "id": 2, 00:25:11.042 "state": "FREE", 00:25:11.042 "validity": 0.0 00:25:11.042 }, 00:25:11.042 { 00:25:11.042 "id": 3, 00:25:11.042 "state": "FREE", 00:25:11.042 
"validity": 0.0 00:25:11.042 }, 00:25:11.042 { 00:25:11.042 "id": 4, 00:25:11.042 "state": "FREE", 00:25:11.042 "validity": 0.0 00:25:11.042 }, 00:25:11.042 { 00:25:11.042 "id": 5, 00:25:11.042 "state": "FREE", 00:25:11.042 "validity": 0.0 00:25:11.042 }, 00:25:11.042 { 00:25:11.042 "id": 6, 00:25:11.042 "state": "FREE", 00:25:11.042 "validity": 0.0 00:25:11.042 }, 00:25:11.042 { 00:25:11.042 "id": 7, 00:25:11.042 "state": "FREE", 00:25:11.042 "validity": 0.0 00:25:11.042 }, 00:25:11.042 { 00:25:11.042 "id": 8, 00:25:11.042 "state": "FREE", 00:25:11.042 "validity": 0.0 00:25:11.042 }, 00:25:11.042 { 00:25:11.042 "id": 9, 00:25:11.042 "state": "FREE", 00:25:11.042 "validity": 0.0 00:25:11.042 }, 00:25:11.042 { 00:25:11.042 "id": 10, 00:25:11.042 "state": "FREE", 00:25:11.042 "validity": 0.0 00:25:11.042 }, 00:25:11.042 { 00:25:11.042 "id": 11, 00:25:11.042 "state": "FREE", 00:25:11.042 "validity": 0.0 00:25:11.042 }, 00:25:11.042 { 00:25:11.042 "id": 12, 00:25:11.042 "state": "FREE", 00:25:11.042 "validity": 0.0 00:25:11.042 }, 00:25:11.042 { 00:25:11.042 "id": 13, 00:25:11.042 "state": "FREE", 00:25:11.042 "validity": 0.0 00:25:11.042 }, 00:25:11.042 { 00:25:11.042 "id": 14, 00:25:11.042 "state": "FREE", 00:25:11.042 "validity": 0.0 00:25:11.042 }, 00:25:11.042 { 00:25:11.042 "id": 15, 00:25:11.042 "state": "FREE", 00:25:11.042 "validity": 0.0 00:25:11.042 }, 00:25:11.042 { 00:25:11.042 "id": 16, 00:25:11.042 "state": "FREE", 00:25:11.042 "validity": 0.0 00:25:11.042 }, 00:25:11.042 { 00:25:11.042 "id": 17, 00:25:11.042 "state": "FREE", 00:25:11.042 "validity": 0.0 00:25:11.042 } 00:25:11.042 ], 00:25:11.042 "read-only": true 00:25:11.042 }, 00:25:11.042 { 00:25:11.042 "name": "cache_device", 00:25:11.042 "type": "bdev", 00:25:11.042 "chunks": [ 00:25:11.042 { 00:25:11.042 "id": 0, 00:25:11.042 "state": "CLOSED", 00:25:11.042 "utilization": 1.0 00:25:11.042 }, 00:25:11.042 { 00:25:11.042 "id": 1, 00:25:11.042 "state": "CLOSED", 00:25:11.042 "utilization": 1.0 00:25:11.042 }, 00:25:11.042 { 00:25:11.042 "id": 2, 00:25:11.042 "state": "OPEN", 00:25:11.042 "utilization": 0.001953125 00:25:11.042 }, 00:25:11.042 { 00:25:11.042 "id": 3, 00:25:11.042 "state": "OPEN", 00:25:11.042 "utilization": 0.0 00:25:11.042 } 00:25:11.042 ], 00:25:11.042 "read-only": true 00:25:11.042 }, 00:25:11.042 { 00:25:11.042 "name": "verbose_mode", 00:25:11.042 "value": true, 00:25:11.042 "unit": "", 00:25:11.042 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:25:11.042 }, 00:25:11.042 { 00:25:11.042 "name": "prep_upgrade_on_shutdown", 00:25:11.042 "value": false, 00:25:11.042 "unit": "", 00:25:11.042 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:25:11.042 } 00:25:11.042 ] 00:25:11.042 } 00:25:11.042 03:11:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:25:11.302 [2024-05-14 03:11:57.135424] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:11.302 [2024-05-14 03:11:57.135488] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:25:11.302 [2024-05-14 03:11:57.135536] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:25:11.302 [2024-05-14 03:11:57.135562] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:11.302 [2024-05-14 03:11:57.135592] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 
00:25:11.302 [2024-05-14 03:11:57.135606] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:25:11.302 [2024-05-14 03:11:57.135616] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:25:11.302 [2024-05-14 03:11:57.135625] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:11.302 [2024-05-14 03:11:57.135648] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:11.302 [2024-05-14 03:11:57.135661] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:25:11.302 [2024-05-14 03:11:57.135671] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:25:11.302 [2024-05-14 03:11:57.135680] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:11.302 [2024-05-14 03:11:57.135742] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.306 ms, result 0 00:25:11.302 true 00:25:11.302 03:11:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:25:11.302 03:11:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:25:11.302 03:11:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:25:11.561 03:11:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:25:11.561 03:11:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:25:11.561 03:11:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:25:11.561 [2024-05-14 03:11:57.579842] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:11.561 [2024-05-14 03:11:57.579907] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:25:11.561 [2024-05-14 03:11:57.579940] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:25:11.561 [2024-05-14 03:11:57.579951] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:11.561 [2024-05-14 03:11:57.579981] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:11.561 [2024-05-14 03:11:57.579994] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:25:11.561 [2024-05-14 03:11:57.580004] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:25:11.561 [2024-05-14 03:11:57.580014] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:11.561 [2024-05-14 03:11:57.580037] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:11.561 [2024-05-14 03:11:57.580049] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:25:11.561 [2024-05-14 03:11:57.580059] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:25:11.561 [2024-05-14 03:11:57.580068] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:11.561 [2024-05-14 03:11:57.580132] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.275 ms, result 0 00:25:11.561 true 00:25:11.820 03:11:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:25:11.820 { 00:25:11.820 "name": "ftl", 00:25:11.820 "properties": [ 
00:25:11.820 { 00:25:11.820 "name": "superblock_version", 00:25:11.820 "value": 5, 00:25:11.820 "read-only": true 00:25:11.820 }, 00:25:11.820 { 00:25:11.820 "name": "base_device", 00:25:11.820 "bands": [ 00:25:11.820 { 00:25:11.820 "id": 0, 00:25:11.820 "state": "FREE", 00:25:11.820 "validity": 0.0 00:25:11.820 }, 00:25:11.820 { 00:25:11.820 "id": 1, 00:25:11.820 "state": "FREE", 00:25:11.820 "validity": 0.0 00:25:11.820 }, 00:25:11.820 { 00:25:11.820 "id": 2, 00:25:11.820 "state": "FREE", 00:25:11.820 "validity": 0.0 00:25:11.820 }, 00:25:11.821 { 00:25:11.821 "id": 3, 00:25:11.821 "state": "FREE", 00:25:11.821 "validity": 0.0 00:25:11.821 }, 00:25:11.821 { 00:25:11.821 "id": 4, 00:25:11.821 "state": "FREE", 00:25:11.821 "validity": 0.0 00:25:11.821 }, 00:25:11.821 { 00:25:11.821 "id": 5, 00:25:11.821 "state": "FREE", 00:25:11.821 "validity": 0.0 00:25:11.821 }, 00:25:11.821 { 00:25:11.821 "id": 6, 00:25:11.821 "state": "FREE", 00:25:11.821 "validity": 0.0 00:25:11.821 }, 00:25:11.821 { 00:25:11.821 "id": 7, 00:25:11.821 "state": "FREE", 00:25:11.821 "validity": 0.0 00:25:11.821 }, 00:25:11.821 { 00:25:11.821 "id": 8, 00:25:11.821 "state": "FREE", 00:25:11.821 "validity": 0.0 00:25:11.821 }, 00:25:11.821 { 00:25:11.821 "id": 9, 00:25:11.821 "state": "FREE", 00:25:11.821 "validity": 0.0 00:25:11.821 }, 00:25:11.821 { 00:25:11.821 "id": 10, 00:25:11.821 "state": "FREE", 00:25:11.821 "validity": 0.0 00:25:11.821 }, 00:25:11.821 { 00:25:11.821 "id": 11, 00:25:11.821 "state": "FREE", 00:25:11.821 "validity": 0.0 00:25:11.821 }, 00:25:11.821 { 00:25:11.821 "id": 12, 00:25:11.821 "state": "FREE", 00:25:11.821 "validity": 0.0 00:25:11.821 }, 00:25:11.821 { 00:25:11.821 "id": 13, 00:25:11.821 "state": "FREE", 00:25:11.821 "validity": 0.0 00:25:11.821 }, 00:25:11.821 { 00:25:11.821 "id": 14, 00:25:11.821 "state": "FREE", 00:25:11.821 "validity": 0.0 00:25:11.821 }, 00:25:11.821 { 00:25:11.821 "id": 15, 00:25:11.821 "state": "FREE", 00:25:11.821 "validity": 0.0 00:25:11.821 }, 00:25:11.821 { 00:25:11.821 "id": 16, 00:25:11.821 "state": "FREE", 00:25:11.821 "validity": 0.0 00:25:11.821 }, 00:25:11.821 { 00:25:11.821 "id": 17, 00:25:11.821 "state": "FREE", 00:25:11.821 "validity": 0.0 00:25:11.821 } 00:25:11.821 ], 00:25:11.821 "read-only": true 00:25:11.821 }, 00:25:11.821 { 00:25:11.821 "name": "cache_device", 00:25:11.821 "type": "bdev", 00:25:11.821 "chunks": [ 00:25:11.821 { 00:25:11.821 "id": 0, 00:25:11.821 "state": "CLOSED", 00:25:11.821 "utilization": 1.0 00:25:11.821 }, 00:25:11.821 { 00:25:11.821 "id": 1, 00:25:11.821 "state": "CLOSED", 00:25:11.821 "utilization": 1.0 00:25:11.821 }, 00:25:11.821 { 00:25:11.821 "id": 2, 00:25:11.821 "state": "OPEN", 00:25:11.821 "utilization": 0.001953125 00:25:11.821 }, 00:25:11.821 { 00:25:11.821 "id": 3, 00:25:11.821 "state": "OPEN", 00:25:11.821 "utilization": 0.0 00:25:11.821 } 00:25:11.821 ], 00:25:11.821 "read-only": true 00:25:11.821 }, 00:25:11.821 { 00:25:11.821 "name": "verbose_mode", 00:25:11.821 "value": true, 00:25:11.821 "unit": "", 00:25:11.821 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:25:11.821 }, 00:25:11.821 { 00:25:11.821 "name": "prep_upgrade_on_shutdown", 00:25:11.821 "value": true, 00:25:11.821 "unit": "", 00:25:11.821 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:25:11.821 } 00:25:11.821 ] 00:25:11.821 } 00:25:12.080 03:11:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:25:12.080 03:11:57 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 95259 ]] 00:25:12.080 03:11:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 95259 00:25:12.080 03:11:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@946 -- # '[' -z 95259 ']' 00:25:12.080 03:11:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # kill -0 95259 00:25:12.080 03:11:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@951 -- # uname 00:25:12.080 03:11:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:12.080 03:11:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 95259 00:25:12.080 killing process with pid 95259 00:25:12.080 03:11:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:12.080 03:11:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:12.080 03:11:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # echo 'killing process with pid 95259' 00:25:12.080 03:11:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@965 -- # kill 95259 00:25:12.080 [2024-05-14 03:11:57.875066] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:25:12.080 03:11:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # wait 95259 00:25:12.080 [2024-05-14 03:11:57.965085] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:25:12.080 [2024-05-14 03:11:57.970550] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:12.081 [2024-05-14 03:11:57.970592] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:25:12.081 [2024-05-14 03:11:57.970626] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:25:12.081 [2024-05-14 03:11:57.970647] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:12.081 [2024-05-14 03:11:57.970674] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:25:12.081 [2024-05-14 03:11:57.971132] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:12.081 [2024-05-14 03:11:57.971147] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:25:12.081 [2024-05-14 03:11:57.971159] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.439 ms 00:25:12.081 [2024-05-14 03:11:57.971173] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:20.205 [2024-05-14 03:12:05.931061] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:20.205 [2024-05-14 03:12:05.931158] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:25:20.205 [2024-05-14 03:12:05.931181] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7959.900 ms 00:25:20.205 [2024-05-14 03:12:05.931192] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:20.205 [2024-05-14 03:12:05.932449] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:20.205 [2024-05-14 03:12:05.932488] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:25:20.205 [2024-05-14 03:12:05.932505] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.235 ms 00:25:20.205 [2024-05-14 03:12:05.932517] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:25:20.205 [2024-05-14 03:12:05.933684] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:20.205 [2024-05-14 03:12:05.933713] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P unmaps 00:25:20.205 [2024-05-14 03:12:05.933747] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.110 ms 00:25:20.205 [2024-05-14 03:12:05.933757] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:20.205 [2024-05-14 03:12:05.935281] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:20.205 [2024-05-14 03:12:05.935319] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:25:20.205 [2024-05-14 03:12:05.935333] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.474 ms 00:25:20.206 [2024-05-14 03:12:05.935343] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:20.206 [2024-05-14 03:12:05.937746] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:20.206 [2024-05-14 03:12:05.937801] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:25:20.206 [2024-05-14 03:12:05.937832] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.366 ms 00:25:20.206 [2024-05-14 03:12:05.937842] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:20.206 [2024-05-14 03:12:05.937919] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:20.206 [2024-05-14 03:12:05.937936] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:25:20.206 [2024-05-14 03:12:05.937947] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:25:20.206 [2024-05-14 03:12:05.937958] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:20.206 [2024-05-14 03:12:05.939387] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:20.206 [2024-05-14 03:12:05.939435] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:25:20.206 [2024-05-14 03:12:05.939449] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.397 ms 00:25:20.206 [2024-05-14 03:12:05.939458] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:20.206 [2024-05-14 03:12:05.940813] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:20.206 [2024-05-14 03:12:05.940862] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:25:20.206 [2024-05-14 03:12:05.940892] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.322 ms 00:25:20.206 [2024-05-14 03:12:05.940901] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:20.206 [2024-05-14 03:12:05.942074] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:20.206 [2024-05-14 03:12:05.942122] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:25:20.206 [2024-05-14 03:12:05.942145] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.122 ms 00:25:20.206 [2024-05-14 03:12:05.942156] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:20.206 [2024-05-14 03:12:05.943399] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:20.206 [2024-05-14 03:12:05.943431] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:25:20.206 [2024-05-14 03:12:05.943460] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.185 ms 00:25:20.206 [2024-05-14 03:12:05.943469] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:20.206 [2024-05-14 03:12:05.943503] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:25:20.206 [2024-05-14 03:12:05.943522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:20.206 [2024-05-14 03:12:05.943535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:25:20.206 [2024-05-14 03:12:05.943545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:25:20.206 [2024-05-14 03:12:05.943556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:20.206 [2024-05-14 03:12:05.943565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:20.206 [2024-05-14 03:12:05.943590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:20.206 [2024-05-14 03:12:05.943600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:20.206 [2024-05-14 03:12:05.943610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:20.206 [2024-05-14 03:12:05.943620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:20.206 [2024-05-14 03:12:05.943629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:20.206 [2024-05-14 03:12:05.943640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:20.206 [2024-05-14 03:12:05.943649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:20.206 [2024-05-14 03:12:05.943659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:20.206 [2024-05-14 03:12:05.943668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:20.206 [2024-05-14 03:12:05.943678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:20.206 [2024-05-14 03:12:05.943687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:20.206 [2024-05-14 03:12:05.943697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:20.206 [2024-05-14 03:12:05.943706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:20.206 [2024-05-14 03:12:05.943719] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:25:20.206 [2024-05-14 03:12:05.943728] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 75efb898-7f58-4054-9e83-434893bc1140 00:25:20.206 [2024-05-14 03:12:05.943739] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:25:20.206 [2024-05-14 03:12:05.943748] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:25:20.206 [2024-05-14 03:12:05.943757] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:25:20.206 [2024-05-14 03:12:05.943772] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:25:20.206 [2024-05-14 03:12:05.943781] ftl_debug.c: 218:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl] limits: 00:25:20.206 [2024-05-14 03:12:05.943791] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:25:20.206 [2024-05-14 03:12:05.943800] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:25:20.206 [2024-05-14 03:12:05.943808] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:25:20.206 [2024-05-14 03:12:05.943816] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:25:20.206 [2024-05-14 03:12:05.943827] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:20.206 [2024-05-14 03:12:05.943836] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:25:20.206 [2024-05-14 03:12:05.943847] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.326 ms 00:25:20.206 [2024-05-14 03:12:05.943866] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:20.206 [2024-05-14 03:12:05.945098] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:20.206 [2024-05-14 03:12:05.945123] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:25:20.206 [2024-05-14 03:12:05.945134] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.197 ms 00:25:20.206 [2024-05-14 03:12:05.945144] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:20.206 [2024-05-14 03:12:05.945227] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:20.206 [2024-05-14 03:12:05.945243] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:25:20.206 [2024-05-14 03:12:05.945253] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:25:20.206 [2024-05-14 03:12:05.945263] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:20.206 [2024-05-14 03:12:05.950100] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:20.206 [2024-05-14 03:12:05.950301] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:25:20.206 [2024-05-14 03:12:05.950436] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:20.206 [2024-05-14 03:12:05.950567] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:20.206 [2024-05-14 03:12:05.950639] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:20.206 [2024-05-14 03:12:05.950746] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:25:20.206 [2024-05-14 03:12:05.950853] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:20.206 [2024-05-14 03:12:05.950907] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:20.206 [2024-05-14 03:12:05.951070] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:20.206 [2024-05-14 03:12:05.951199] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:25:20.206 [2024-05-14 03:12:05.951306] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:20.206 [2024-05-14 03:12:05.951361] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:20.206 [2024-05-14 03:12:05.951493] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:20.206 [2024-05-14 03:12:05.951545] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:25:20.206 [2024-05-14 03:12:05.951589] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:20.206 [2024-05-14 03:12:05.951632] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:20.206 [2024-05-14 03:12:05.959836] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:20.206 [2024-05-14 03:12:05.960039] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:25:20.206 [2024-05-14 03:12:05.960179] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:20.206 [2024-05-14 03:12:05.960255] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:20.206 [2024-05-14 03:12:05.963737] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:20.206 [2024-05-14 03:12:05.963911] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:25:20.206 [2024-05-14 03:12:05.964034] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:20.206 [2024-05-14 03:12:05.964082] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:20.206 [2024-05-14 03:12:05.964312] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:20.206 [2024-05-14 03:12:05.964441] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:25:20.206 [2024-05-14 03:12:05.964547] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:20.206 [2024-05-14 03:12:05.964606] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:20.206 [2024-05-14 03:12:05.964774] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:20.206 [2024-05-14 03:12:05.964828] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:25:20.206 [2024-05-14 03:12:05.964869] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:20.206 [2024-05-14 03:12:05.964904] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:20.206 [2024-05-14 03:12:05.965087] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:20.206 [2024-05-14 03:12:05.965224] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:25:20.206 [2024-05-14 03:12:05.965332] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:20.206 [2024-05-14 03:12:05.965389] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:20.206 [2024-05-14 03:12:05.965544] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:20.206 [2024-05-14 03:12:05.965602] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:25:20.206 [2024-05-14 03:12:05.965643] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:20.206 [2024-05-14 03:12:05.965677] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:20.206 [2024-05-14 03:12:05.965803] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:20.206 [2024-05-14 03:12:05.965855] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:25:20.206 [2024-05-14 03:12:05.965896] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:20.206 [2024-05-14 03:12:05.965938] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:20.207 [2024-05-14 03:12:05.966071] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:20.207 [2024-05-14 03:12:05.966122] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:25:20.207 [2024-05-14 03:12:05.966176] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:20.207 [2024-05-14 
03:12:05.966213] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:20.207 [2024-05-14 03:12:05.966442] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7995.911 ms, result 0 00:25:22.739 03:12:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:25:22.739 03:12:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:25:22.739 03:12:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:25:22.739 03:12:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:25:22.739 03:12:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:22.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:22.739 03:12:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=95742 00:25:22.739 03:12:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:25:22.739 03:12:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 95742 00:25:22.739 03:12:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@827 -- # '[' -z 95742 ']' 00:25:22.739 03:12:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:22.739 03:12:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:22.739 03:12:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:22.739 03:12:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:22.739 03:12:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:22.739 03:12:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:22.998 [2024-05-14 03:12:08.835840] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:25:22.998 [2024-05-14 03:12:08.836014] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95742 ] 00:25:22.998 [2024-05-14 03:12:08.986619] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:25:22.998 [2024-05-14 03:12:09.007165] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:23.257 [2024-05-14 03:12:09.040843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:23.257 [2024-05-14 03:12:09.265476] bdev.c:8090:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:25:23.257 [2024-05-14 03:12:09.265564] bdev.c:8090:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:25:23.517 [2024-05-14 03:12:09.402055] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:23.517 [2024-05-14 03:12:09.402100] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:25:23.517 [2024-05-14 03:12:09.402126] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:25:23.517 [2024-05-14 03:12:09.402181] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:23.518 [2024-05-14 03:12:09.402303] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:23.518 [2024-05-14 03:12:09.402340] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:25:23.518 [2024-05-14 03:12:09.402360] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.075 ms 00:25:23.518 [2024-05-14 03:12:09.402389] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:23.518 [2024-05-14 03:12:09.402441] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:25:23.518 [2024-05-14 03:12:09.402828] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:25:23.518 [2024-05-14 03:12:09.402880] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:23.518 [2024-05-14 03:12:09.402910] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:25:23.518 [2024-05-14 03:12:09.402930] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.448 ms 00:25:23.518 [2024-05-14 03:12:09.402947] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:23.518 [2024-05-14 03:12:09.404235] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:25:23.518 [2024-05-14 03:12:09.406381] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:23.518 [2024-05-14 03:12:09.406433] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:25:23.518 [2024-05-14 03:12:09.406457] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.173 ms 00:25:23.518 [2024-05-14 03:12:09.406481] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:23.518 [2024-05-14 03:12:09.406562] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:23.518 [2024-05-14 03:12:09.406588] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:25:23.518 [2024-05-14 03:12:09.406607] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:25:23.518 [2024-05-14 03:12:09.406637] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:23.518 [2024-05-14 03:12:09.410884] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:23.518 [2024-05-14 03:12:09.410928] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:25:23.518 [2024-05-14 03:12:09.410951] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.145 ms 00:25:23.518 [2024-05-14 03:12:09.410976] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] 
status: 0 00:25:23.518 [2024-05-14 03:12:09.411070] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:23.518 [2024-05-14 03:12:09.411096] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:25:23.518 [2024-05-14 03:12:09.411127] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.050 ms 00:25:23.518 [2024-05-14 03:12:09.411169] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:23.518 [2024-05-14 03:12:09.411257] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:23.518 [2024-05-14 03:12:09.411280] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:25:23.518 [2024-05-14 03:12:09.411298] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:25:23.518 [2024-05-14 03:12:09.411332] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:23.518 [2024-05-14 03:12:09.411383] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:25:23.518 [2024-05-14 03:12:09.412783] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:23.518 [2024-05-14 03:12:09.412820] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:25:23.518 [2024-05-14 03:12:09.412842] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.411 ms 00:25:23.518 [2024-05-14 03:12:09.412860] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:23.518 [2024-05-14 03:12:09.412932] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:23.518 [2024-05-14 03:12:09.412957] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:25:23.518 [2024-05-14 03:12:09.412980] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:25:23.518 [2024-05-14 03:12:09.412997] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:23.518 [2024-05-14 03:12:09.413071] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:25:23.518 [2024-05-14 03:12:09.413109] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x138 bytes 00:25:23.518 [2024-05-14 03:12:09.413199] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:25:23.518 [2024-05-14 03:12:09.413243] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x140 bytes 00:25:23.518 [2024-05-14 03:12:09.413350] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:25:23.518 [2024-05-14 03:12:09.413384] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:25:23.518 [2024-05-14 03:12:09.413405] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:25:23.518 [2024-05-14 03:12:09.413427] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:25:23.518 [2024-05-14 03:12:09.413447] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:25:23.518 [2024-05-14 03:12:09.413478] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:25:23.518 [2024-05-14 03:12:09.413509] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:25:23.518 [2024-05-14 03:12:09.413553] ftl_layout.c: 
679:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:25:23.518 [2024-05-14 03:12:09.413577] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:25:23.518 [2024-05-14 03:12:09.413597] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:23.518 [2024-05-14 03:12:09.413616] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:25:23.518 [2024-05-14 03:12:09.413633] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.530 ms 00:25:23.518 [2024-05-14 03:12:09.413658] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:23.518 [2024-05-14 03:12:09.413758] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:23.518 [2024-05-14 03:12:09.413791] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:25:23.518 [2024-05-14 03:12:09.413811] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:25:23.518 [2024-05-14 03:12:09.413828] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:23.518 [2024-05-14 03:12:09.413952] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:25:23.518 [2024-05-14 03:12:09.413979] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:25:23.518 [2024-05-14 03:12:09.413999] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:25:23.518 [2024-05-14 03:12:09.414018] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:23.518 [2024-05-14 03:12:09.414044] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:25:23.518 [2024-05-14 03:12:09.414061] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:25:23.518 [2024-05-14 03:12:09.414084] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:25:23.518 [2024-05-14 03:12:09.414103] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:25:23.518 [2024-05-14 03:12:09.414121] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:25:23.518 [2024-05-14 03:12:09.414160] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:23.518 [2024-05-14 03:12:09.414179] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:25:23.518 [2024-05-14 03:12:09.414197] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:25:23.518 [2024-05-14 03:12:09.414213] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:23.518 [2024-05-14 03:12:09.414230] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:25:23.518 [2024-05-14 03:12:09.414246] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:25:23.518 [2024-05-14 03:12:09.414262] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:23.518 [2024-05-14 03:12:09.414279] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:25:23.518 [2024-05-14 03:12:09.414296] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:25:23.518 [2024-05-14 03:12:09.414313] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:23.518 [2024-05-14 03:12:09.414329] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:25:23.518 [2024-05-14 03:12:09.414346] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:25:23.518 [2024-05-14 03:12:09.414362] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:25:23.518 [2024-05-14 
03:12:09.414384] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:25:23.518 [2024-05-14 03:12:09.414402] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:25:23.518 [2024-05-14 03:12:09.414419] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:25:23.518 [2024-05-14 03:12:09.414435] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:25:23.518 [2024-05-14 03:12:09.414450] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:25:23.518 [2024-05-14 03:12:09.414466] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:25:23.518 [2024-05-14 03:12:09.414482] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:25:23.518 [2024-05-14 03:12:09.414499] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:25:23.518 [2024-05-14 03:12:09.414515] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:25:23.518 [2024-05-14 03:12:09.414531] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:25:23.518 [2024-05-14 03:12:09.414546] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:25:23.518 [2024-05-14 03:12:09.414563] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:25:23.518 [2024-05-14 03:12:09.414580] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:25:23.518 [2024-05-14 03:12:09.414596] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:25:23.518 [2024-05-14 03:12:09.414613] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:23.518 [2024-05-14 03:12:09.414629] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:25:23.518 [2024-05-14 03:12:09.414650] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:25:23.518 [2024-05-14 03:12:09.414669] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:23.518 [2024-05-14 03:12:09.414685] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:25:23.518 [2024-05-14 03:12:09.414702] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:25:23.518 [2024-05-14 03:12:09.414720] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:25:23.518 [2024-05-14 03:12:09.414737] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:23.518 [2024-05-14 03:12:09.414755] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:25:23.518 [2024-05-14 03:12:09.414773] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:25:23.518 [2024-05-14 03:12:09.414789] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:25:23.519 [2024-05-14 03:12:09.414805] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:25:23.519 [2024-05-14 03:12:09.414822] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:25:23.519 [2024-05-14 03:12:09.414839] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:25:23.519 [2024-05-14 03:12:09.414857] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:25:23.519 [2024-05-14 03:12:09.414878] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:23.519 [2024-05-14 03:12:09.414920] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 
blk_offs:0x20 blk_sz:0xe80 00:25:23.519 [2024-05-14 03:12:09.414940] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:25:23.519 [2024-05-14 03:12:09.414963] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:25:23.519 [2024-05-14 03:12:09.414982] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:25:23.519 [2024-05-14 03:12:09.415000] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:25:23.519 [2024-05-14 03:12:09.415018] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:25:23.519 [2024-05-14 03:12:09.415036] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:25:23.519 [2024-05-14 03:12:09.415053] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:25:23.519 [2024-05-14 03:12:09.415070] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:25:23.519 [2024-05-14 03:12:09.415087] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:25:23.519 [2024-05-14 03:12:09.415105] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:25:23.519 [2024-05-14 03:12:09.415122] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:25:23.519 [2024-05-14 03:12:09.415156] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:25:23.519 [2024-05-14 03:12:09.415176] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:25:23.519 [2024-05-14 03:12:09.415195] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:23.519 [2024-05-14 03:12:09.415215] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:23.519 [2024-05-14 03:12:09.415233] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:25:23.519 [2024-05-14 03:12:09.415251] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:25:23.519 [2024-05-14 03:12:09.415273] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:25:23.519 [2024-05-14 03:12:09.415296] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:23.519 [2024-05-14 03:12:09.415314] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:25:23.519 [2024-05-14 03:12:09.415331] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.390 ms 00:25:23.519 [2024-05-14 03:12:09.415355] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:25:23.519 [2024-05-14 03:12:09.421350] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:23.519 [2024-05-14 03:12:09.421388] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:25:23.519 [2024-05-14 03:12:09.421418] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 5.890 ms 00:25:23.519 [2024-05-14 03:12:09.421439] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:23.519 [2024-05-14 03:12:09.421491] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:23.519 [2024-05-14 03:12:09.421527] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:25:23.519 [2024-05-14 03:12:09.421546] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:25:23.519 [2024-05-14 03:12:09.421562] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:23.519 [2024-05-14 03:12:09.429639] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:23.519 [2024-05-14 03:12:09.429683] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:25:23.519 [2024-05-14 03:12:09.429707] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.995 ms 00:25:23.519 [2024-05-14 03:12:09.429724] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:23.519 [2024-05-14 03:12:09.429788] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:23.519 [2024-05-14 03:12:09.429812] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:25:23.519 [2024-05-14 03:12:09.429846] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:25:23.519 [2024-05-14 03:12:09.429869] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:23.519 [2024-05-14 03:12:09.430331] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:23.519 [2024-05-14 03:12:09.430389] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:25:23.519 [2024-05-14 03:12:09.430415] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.353 ms 00:25:23.519 [2024-05-14 03:12:09.430439] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:23.519 [2024-05-14 03:12:09.430521] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:23.519 [2024-05-14 03:12:09.430545] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:25:23.519 [2024-05-14 03:12:09.430565] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:25:23.519 [2024-05-14 03:12:09.430582] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:23.519 [2024-05-14 03:12:09.435787] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:23.519 [2024-05-14 03:12:09.435828] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:25:23.519 [2024-05-14 03:12:09.435850] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 5.155 ms 00:25:23.519 [2024-05-14 03:12:09.435882] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:23.519 [2024-05-14 03:12:09.438254] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:25:23.519 [2024-05-14 03:12:09.438301] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:25:23.519 [2024-05-14 03:12:09.438325] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl] Action 00:25:23.519 [2024-05-14 03:12:09.438349] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:25:23.519 [2024-05-14 03:12:09.438368] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.299 ms 00:25:23.519 [2024-05-14 03:12:09.438384] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:23.519 [2024-05-14 03:12:09.442242] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:23.519 [2024-05-14 03:12:09.442287] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:25:23.519 [2024-05-14 03:12:09.442309] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.801 ms 00:25:23.519 [2024-05-14 03:12:09.442327] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:23.519 [2024-05-14 03:12:09.443926] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:23.519 [2024-05-14 03:12:09.443966] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:25:23.519 [2024-05-14 03:12:09.443994] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.540 ms 00:25:23.519 [2024-05-14 03:12:09.444013] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:23.519 [2024-05-14 03:12:09.445657] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:23.519 [2024-05-14 03:12:09.445695] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:25:23.519 [2024-05-14 03:12:09.445716] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.590 ms 00:25:23.519 [2024-05-14 03:12:09.445733] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:23.519 [2024-05-14 03:12:09.445998] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:23.519 [2024-05-14 03:12:09.446034] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:25:23.519 [2024-05-14 03:12:09.446057] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.163 ms 00:25:23.519 [2024-05-14 03:12:09.446106] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:23.519 [2024-05-14 03:12:09.464494] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:23.519 [2024-05-14 03:12:09.464583] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:25:23.519 [2024-05-14 03:12:09.464622] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 18.347 ms 00:25:23.519 [2024-05-14 03:12:09.464638] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:23.519 [2024-05-14 03:12:09.471980] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:25:23.519 [2024-05-14 03:12:09.472817] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:23.519 [2024-05-14 03:12:09.472883] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:25:23.519 [2024-05-14 03:12:09.472907] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 8.102 ms 00:25:23.519 [2024-05-14 03:12:09.472926] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:23.519 [2024-05-14 03:12:09.473043] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:23.519 [2024-05-14 03:12:09.473070] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:25:23.519 [2024-05-14 03:12:09.473090] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:25:23.519 
[2024-05-14 03:12:09.473106] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:23.519 [2024-05-14 03:12:09.473239] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:23.519 [2024-05-14 03:12:09.473283] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:25:23.519 [2024-05-14 03:12:09.473311] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:25:23.519 [2024-05-14 03:12:09.473329] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:23.519 [2024-05-14 03:12:09.475265] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:23.519 [2024-05-14 03:12:09.475322] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:25:23.519 [2024-05-14 03:12:09.475345] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.887 ms 00:25:23.519 [2024-05-14 03:12:09.475363] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:23.519 [2024-05-14 03:12:09.475418] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:23.519 [2024-05-14 03:12:09.475441] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:25:23.519 [2024-05-14 03:12:09.475473] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:25:23.519 [2024-05-14 03:12:09.475491] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:23.519 [2024-05-14 03:12:09.475576] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:25:23.519 [2024-05-14 03:12:09.475603] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:23.519 [2024-05-14 03:12:09.475637] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:25:23.519 [2024-05-14 03:12:09.475667] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:25:23.519 [2024-05-14 03:12:09.475685] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:23.520 [2024-05-14 03:12:09.479015] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:23.520 [2024-05-14 03:12:09.479056] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:25:23.520 [2024-05-14 03:12:09.479092] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.286 ms 00:25:23.520 [2024-05-14 03:12:09.479111] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:23.520 [2024-05-14 03:12:09.479254] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:23.520 [2024-05-14 03:12:09.479282] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:25:23.520 [2024-05-14 03:12:09.479303] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:25:23.520 [2024-05-14 03:12:09.479320] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:23.520 [2024-05-14 03:12:09.480753] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 78.061 ms, result 0 00:25:23.520 [2024-05-14 03:12:09.494362] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:23.520 [2024-05-14 03:12:09.510456] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:25:23.520 [2024-05-14 03:12:09.518320] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of 
trtype to be removed in v24.09 00:25:23.520 [2024-05-14 03:12:09.518679] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:25:23.779 03:12:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:23.779 03:12:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # return 0 00:25:23.779 03:12:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:23.779 03:12:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:25:23.779 03:12:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:25:24.039 [2024-05-14 03:12:09.842724] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.039 [2024-05-14 03:12:09.842790] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:25:24.039 [2024-05-14 03:12:09.842817] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:25:24.039 [2024-05-14 03:12:09.842834] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.039 [2024-05-14 03:12:09.842878] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.039 [2024-05-14 03:12:09.842901] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:25:24.039 [2024-05-14 03:12:09.842926] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:25:24.039 [2024-05-14 03:12:09.842942] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.039 [2024-05-14 03:12:09.842981] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.039 [2024-05-14 03:12:09.843017] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:25:24.039 [2024-05-14 03:12:09.843067] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:25:24.039 [2024-05-14 03:12:09.843084] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.039 [2024-05-14 03:12:09.843209] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.432 ms, result 0 00:25:24.039 true 00:25:24.039 03:12:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:25:24.298 { 00:25:24.298 "name": "ftl", 00:25:24.298 "properties": [ 00:25:24.298 { 00:25:24.298 "name": "superblock_version", 00:25:24.298 "value": 5, 00:25:24.298 "read-only": true 00:25:24.298 }, 00:25:24.298 { 00:25:24.298 "name": "base_device", 00:25:24.298 "bands": [ 00:25:24.298 { 00:25:24.298 "id": 0, 00:25:24.298 "state": "CLOSED", 00:25:24.298 "validity": 1.0 00:25:24.298 }, 00:25:24.298 { 00:25:24.298 "id": 1, 00:25:24.298 "state": "CLOSED", 00:25:24.298 "validity": 1.0 00:25:24.298 }, 00:25:24.298 { 00:25:24.298 "id": 2, 00:25:24.298 "state": "CLOSED", 00:25:24.298 "validity": 0.007843137254901933 00:25:24.298 }, 00:25:24.298 { 00:25:24.298 "id": 3, 00:25:24.298 "state": "FREE", 00:25:24.298 "validity": 0.0 00:25:24.298 }, 00:25:24.298 { 00:25:24.298 "id": 4, 00:25:24.298 "state": "FREE", 00:25:24.298 "validity": 0.0 00:25:24.298 }, 00:25:24.298 { 00:25:24.298 "id": 5, 00:25:24.298 "state": "FREE", 00:25:24.298 "validity": 0.0 00:25:24.298 }, 00:25:24.298 { 00:25:24.298 "id": 6, 00:25:24.298 "state": "FREE", 00:25:24.298 "validity": 0.0 00:25:24.298 }, 00:25:24.298 { 00:25:24.298 
"id": 7, 00:25:24.298 "state": "FREE", 00:25:24.298 "validity": 0.0 00:25:24.298 }, 00:25:24.298 { 00:25:24.298 "id": 8, 00:25:24.298 "state": "FREE", 00:25:24.298 "validity": 0.0 00:25:24.298 }, 00:25:24.298 { 00:25:24.298 "id": 9, 00:25:24.298 "state": "FREE", 00:25:24.298 "validity": 0.0 00:25:24.298 }, 00:25:24.298 { 00:25:24.298 "id": 10, 00:25:24.298 "state": "FREE", 00:25:24.298 "validity": 0.0 00:25:24.298 }, 00:25:24.298 { 00:25:24.298 "id": 11, 00:25:24.298 "state": "FREE", 00:25:24.298 "validity": 0.0 00:25:24.298 }, 00:25:24.298 { 00:25:24.298 "id": 12, 00:25:24.298 "state": "FREE", 00:25:24.298 "validity": 0.0 00:25:24.298 }, 00:25:24.298 { 00:25:24.298 "id": 13, 00:25:24.298 "state": "FREE", 00:25:24.298 "validity": 0.0 00:25:24.298 }, 00:25:24.298 { 00:25:24.298 "id": 14, 00:25:24.298 "state": "FREE", 00:25:24.298 "validity": 0.0 00:25:24.298 }, 00:25:24.298 { 00:25:24.298 "id": 15, 00:25:24.298 "state": "FREE", 00:25:24.298 "validity": 0.0 00:25:24.298 }, 00:25:24.298 { 00:25:24.298 "id": 16, 00:25:24.298 "state": "FREE", 00:25:24.298 "validity": 0.0 00:25:24.298 }, 00:25:24.298 { 00:25:24.298 "id": 17, 00:25:24.298 "state": "FREE", 00:25:24.298 "validity": 0.0 00:25:24.298 } 00:25:24.298 ], 00:25:24.298 "read-only": true 00:25:24.298 }, 00:25:24.298 { 00:25:24.298 "name": "cache_device", 00:25:24.298 "type": "bdev", 00:25:24.298 "chunks": [ 00:25:24.298 { 00:25:24.298 "id": 0, 00:25:24.298 "state": "OPEN", 00:25:24.298 "utilization": 0.0 00:25:24.298 }, 00:25:24.298 { 00:25:24.298 "id": 1, 00:25:24.298 "state": "OPEN", 00:25:24.298 "utilization": 0.0 00:25:24.298 }, 00:25:24.298 { 00:25:24.298 "id": 2, 00:25:24.298 "state": "FREE", 00:25:24.298 "utilization": 0.0 00:25:24.298 }, 00:25:24.298 { 00:25:24.298 "id": 3, 00:25:24.298 "state": "FREE", 00:25:24.298 "utilization": 0.0 00:25:24.298 } 00:25:24.298 ], 00:25:24.298 "read-only": true 00:25:24.298 }, 00:25:24.298 { 00:25:24.298 "name": "verbose_mode", 00:25:24.299 "value": true, 00:25:24.299 "unit": "", 00:25:24.299 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:25:24.299 }, 00:25:24.299 { 00:25:24.299 "name": "prep_upgrade_on_shutdown", 00:25:24.299 "value": false, 00:25:24.299 "unit": "", 00:25:24.299 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:25:24.299 } 00:25:24.299 ] 00:25:24.299 } 00:25:24.299 03:12:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:25:24.299 03:12:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:25:24.299 03:12:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:25:24.558 03:12:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:25:24.558 03:12:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:25:24.558 03:12:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:25:24.558 03:12:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:25:24.558 03:12:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:25:24.817 Validate MD5 checksum, iteration 1 00:25:24.817 03:12:10 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:25:24.817 03:12:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:25:24.817 03:12:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:25:24.817 03:12:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:25:24.817 03:12:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:25:24.817 03:12:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:25:24.817 03:12:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:25:24.817 03:12:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:25:24.817 03:12:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:24.817 03:12:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:24.817 03:12:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:24.817 03:12:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:25:24.817 03:12:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:25:24.817 [2024-05-14 03:12:10.683451] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:25:24.817 [2024-05-14 03:12:10.683574] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95775 ] 00:25:24.817 [2024-05-14 03:12:10.814991] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
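The checksum pass launched above follows a simple pattern: spdk_dd streams one window of 1024 blocks x 1048576 bytes (1 GiB) out of the ftln1 namespace exposed by the NVMe/TCP target into a plain file, and the file's md5sum is then compared against the value expected for that window. A minimal sketch of that pattern; SPDK_DIR, FILE and EXPECTED_SUM are placeholders, not the bookkeeping the upgrade_shutdown.sh script actually uses:

    # Sketch only: read one 1 GiB window from ftln1 over NVMe/TCP and verify its md5sum.
    # SPDK_DIR, FILE and EXPECTED_SUM are placeholders for the test's own variables.
    FILE=/tmp/ftl_window
    EXPECTED_SUM=placeholder_md5_for_this_window
    "$SPDK_DIR"/build/bin/spdk_dd --cpumask='[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json="$SPDK_DIR"/test/ftl/config/ini.json \
        --ib=ftln1 --of="$FILE" --bs=1048576 --count=1024 --qd=2 --skip=0
    sum=$(md5sum "$FILE" | cut -d' ' -f1)       # same cut -f1 -d' ' step as in the trace above
    [[ "$sum" == "$EXPECTED_SUM" ]] || exit 1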
00:25:24.817 [2024-05-14 03:12:10.834632] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:25.077 [2024-05-14 03:12:10.868470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:29.918  Copying: 500/1024 [MB] (500 MBps) Copying: 984/1024 [MB] (484 MBps) Copying: 1024/1024 [MB] (average 490 MBps) 00:25:29.918 00:25:29.918 03:12:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:25:29.918 03:12:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:31.297 03:12:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:25:31.297 Validate MD5 checksum, iteration 2 00:25:31.297 03:12:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=667b8a39bc53d18ed2884e1888b872ac 00:25:31.297 03:12:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 667b8a39bc53d18ed2884e1888b872ac != \6\6\7\b\8\a\3\9\b\c\5\3\d\1\8\e\d\2\8\8\4\e\1\8\8\8\b\8\7\2\a\c ]] 00:25:31.297 03:12:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:25:31.297 03:12:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:25:31.297 03:12:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:25:31.297 03:12:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:25:31.297 03:12:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:31.297 03:12:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:31.297 03:12:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:31.297 03:12:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:25:31.297 03:12:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:25:31.556 [2024-05-14 03:12:17.381235] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:25:31.556 [2024-05-14 03:12:17.381410] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95843 ] 00:25:31.556 [2024-05-14 03:12:17.528900] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
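Iteration 2, launched just above with --skip=1024, repeats the same read starting 1024 blocks (1 GiB) further into the namespace, so the two iterations together cover the first 2 GiB of the device. A small sketch of how the window advances per iteration, assuming the tcp_dd helper from ftl/common.sh seen in the trace; the loop variable names are illustrative only:

    # Sketch: advance the dd window by one count per iteration, matching skip=0 -> skip=1024 above.
    bs=1048576      # 1 MiB blocks, as passed to spdk_dd
    count=1024      # blocks per iteration -> 1 GiB per window
    skip=0
    for ((i = 0; i < 2; i++)); do                   # two iterations appear in this run
        tcp_dd --ib=ftln1 --of="$FILE" --bs=$bs --count=$count --qd=2 --skip=$skip
        skip=$((skip + count))                      # next window starts where the previous one ended
    done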
00:25:31.556 [2024-05-14 03:12:17.551896] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:31.815 [2024-05-14 03:12:17.594497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:35.085  Copying: 492/1024 [MB] (492 MBps) Copying: 989/1024 [MB] (497 MBps) Copying: 1024/1024 [MB] (average 494 MBps) 00:25:35.085 00:25:35.085 03:12:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:25:35.085 03:12:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:36.988 03:12:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:25:36.988 03:12:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=c7f65a782eaf1b1c20abf490754ae710 00:25:36.988 03:12:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ c7f65a782eaf1b1c20abf490754ae710 != \c\7\f\6\5\a\7\8\2\e\a\f\1\b\1\c\2\0\a\b\f\4\9\0\7\5\4\a\e\7\1\0 ]] 00:25:36.988 03:12:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:25:36.988 03:12:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:25:36.988 03:12:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:25:36.988 03:12:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 95742 ]] 00:25:36.988 03:12:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 95742 00:25:36.988 03:12:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:25:36.988 03:12:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:25:36.988 03:12:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:25:36.988 03:12:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:25:36.988 03:12:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:36.988 03:12:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=95904 00:25:36.988 03:12:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:36.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:36.988 03:12:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:25:36.988 03:12:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 95904 00:25:36.988 03:12:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@827 -- # '[' -z 95904 ']' 00:25:36.988 03:12:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:36.988 03:12:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:36.988 03:12:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:36.988 03:12:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:36.988 03:12:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:36.988 [2024-05-14 03:12:22.810622] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 
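The restart visible here is the dirty-shutdown step: the running target is killed with SIGKILL, so the FTL device gets no graceful shutdown, and a fresh spdk_tgt is then started from the same saved tgt.json. The FTL instance therefore reattaches in dirty state, and the startup that follows takes the recovery path (Recover band state, Restore P2L checkpoints) rather than a clean load. A minimal sketch of that sequence, assuming spdk_tgt_pid and SPDK_DIR stand in for the variables the ftl/common.sh helpers maintain and that waitforlisten is the autotest_common.sh helper seen in the trace:

    # Sketch: dirty shutdown followed by restart from the saved target config.
    kill -9 "$spdk_tgt_pid"                          # no graceful FTL shutdown -> device stays dirty
    unset spdk_tgt_pid
    "$SPDK_DIR"/build/bin/spdk_tgt --cpumask='[0]' \
        --config="$SPDK_DIR"/test/ftl/config/tgt.json &   # same JSON config recreates the bdevs and ftl
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"                    # helper from autotest_common.sh, as in the log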
00:25:36.988 [2024-05-14 03:12:22.811537] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95904 ] 00:25:36.988 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 826: 95742 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:25:36.988 [2024-05-14 03:12:22.958752] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:36.988 [2024-05-14 03:12:22.978553] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:36.988 [2024-05-14 03:12:23.011860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:37.247 [2024-05-14 03:12:23.236128] bdev.c:8090:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:25:37.247 [2024-05-14 03:12:23.236262] bdev.c:8090:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:25:37.508 [2024-05-14 03:12:23.372493] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:37.508 [2024-05-14 03:12:23.372542] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:25:37.508 [2024-05-14 03:12:23.372606] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:25:37.508 [2024-05-14 03:12:23.372626] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:37.508 [2024-05-14 03:12:23.372692] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:37.508 [2024-05-14 03:12:23.372710] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:25:37.508 [2024-05-14 03:12:23.372720] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:25:37.508 [2024-05-14 03:12:23.372735] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:37.508 [2024-05-14 03:12:23.372765] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:25:37.508 [2024-05-14 03:12:23.373005] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:25:37.508 [2024-05-14 03:12:23.373029] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:37.508 [2024-05-14 03:12:23.373043] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:25:37.508 [2024-05-14 03:12:23.373062] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.270 ms 00:25:37.508 [2024-05-14 03:12:23.373071] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:37.508 [2024-05-14 03:12:23.373640] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:25:37.508 [2024-05-14 03:12:23.376867] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:37.508 [2024-05-14 03:12:23.376904] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:25:37.508 [2024-05-14 03:12:23.376919] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.229 ms 00:25:37.508 [2024-05-14 03:12:23.376934] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:37.508 [2024-05-14 03:12:23.377798] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:37.508 [2024-05-14 03:12:23.377836] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:25:37.508 
[2024-05-14 03:12:23.377850] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:25:37.508 [2024-05-14 03:12:23.377860] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:37.508 [2024-05-14 03:12:23.378343] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:37.508 [2024-05-14 03:12:23.378374] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:25:37.508 [2024-05-14 03:12:23.378396] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.399 ms 00:25:37.508 [2024-05-14 03:12:23.378412] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:37.508 [2024-05-14 03:12:23.378475] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:37.508 [2024-05-14 03:12:23.378524] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:25:37.508 [2024-05-14 03:12:23.378573] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:25:37.508 [2024-05-14 03:12:23.378604] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:37.508 [2024-05-14 03:12:23.378644] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:37.508 [2024-05-14 03:12:23.378658] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:25:37.508 [2024-05-14 03:12:23.378668] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:25:37.508 [2024-05-14 03:12:23.378706] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:37.508 [2024-05-14 03:12:23.378737] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:25:37.508 [2024-05-14 03:12:23.379679] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:37.508 [2024-05-14 03:12:23.379702] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:25:37.508 [2024-05-14 03:12:23.379716] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.948 ms 00:25:37.508 [2024-05-14 03:12:23.379726] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:37.508 [2024-05-14 03:12:23.379753] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:37.508 [2024-05-14 03:12:23.379775] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:25:37.508 [2024-05-14 03:12:23.379791] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:25:37.508 [2024-05-14 03:12:23.379804] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:37.508 [2024-05-14 03:12:23.379832] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:25:37.508 [2024-05-14 03:12:23.379864] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x138 bytes 00:25:37.508 [2024-05-14 03:12:23.379905] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:25:37.508 [2024-05-14 03:12:23.379924] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x140 bytes 00:25:37.508 [2024-05-14 03:12:23.379988] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:25:37.508 [2024-05-14 03:12:23.380010] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:25:37.509 [2024-05-14 03:12:23.380026] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:25:37.509 [2024-05-14 03:12:23.380038] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:25:37.509 [2024-05-14 03:12:23.380049] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:25:37.509 [2024-05-14 03:12:23.380058] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:25:37.509 [2024-05-14 03:12:23.380069] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:25:37.509 [2024-05-14 03:12:23.380079] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:25:37.509 [2024-05-14 03:12:23.380087] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:25:37.509 [2024-05-14 03:12:23.380096] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:37.509 [2024-05-14 03:12:23.380113] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:25:37.509 [2024-05-14 03:12:23.380123] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.267 ms 00:25:37.509 [2024-05-14 03:12:23.380131] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:37.509 [2024-05-14 03:12:23.380277] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:37.509 [2024-05-14 03:12:23.380296] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:25:37.509 [2024-05-14 03:12:23.380306] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.067 ms 00:25:37.509 [2024-05-14 03:12:23.380316] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:37.509 [2024-05-14 03:12:23.380396] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:25:37.509 [2024-05-14 03:12:23.380412] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:25:37.509 [2024-05-14 03:12:23.380433] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:25:37.509 [2024-05-14 03:12:23.380443] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:37.509 [2024-05-14 03:12:23.380453] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:25:37.509 [2024-05-14 03:12:23.380465] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:25:37.509 [2024-05-14 03:12:23.380476] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:25:37.509 [2024-05-14 03:12:23.380500] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:25:37.509 [2024-05-14 03:12:23.380510] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:25:37.509 [2024-05-14 03:12:23.380519] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:37.509 [2024-05-14 03:12:23.380528] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:25:37.509 [2024-05-14 03:12:23.380537] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:25:37.509 [2024-05-14 03:12:23.380546] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:37.509 [2024-05-14 03:12:23.380572] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:25:37.509 [2024-05-14 03:12:23.380582] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:25:37.509 [2024-05-14 03:12:23.380604] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:37.509 [2024-05-14 03:12:23.380613] 
ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:25:37.509 [2024-05-14 03:12:23.380622] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:25:37.509 [2024-05-14 03:12:23.380630] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:37.509 [2024-05-14 03:12:23.380639] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:25:37.509 [2024-05-14 03:12:23.380648] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:25:37.509 [2024-05-14 03:12:23.380659] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:25:37.509 [2024-05-14 03:12:23.380668] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:25:37.509 [2024-05-14 03:12:23.380677] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:25:37.509 [2024-05-14 03:12:23.380685] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:25:37.509 [2024-05-14 03:12:23.380694] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:25:37.509 [2024-05-14 03:12:23.380702] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:25:37.509 [2024-05-14 03:12:23.380711] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:25:37.509 [2024-05-14 03:12:23.380719] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:25:37.509 [2024-05-14 03:12:23.380728] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:25:37.509 [2024-05-14 03:12:23.380736] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:25:37.509 [2024-05-14 03:12:23.380745] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:25:37.509 [2024-05-14 03:12:23.380753] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:25:37.509 [2024-05-14 03:12:23.380762] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:25:37.509 [2024-05-14 03:12:23.380770] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:25:37.509 [2024-05-14 03:12:23.380779] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:25:37.509 [2024-05-14 03:12:23.380787] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:37.509 [2024-05-14 03:12:23.380798] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:25:37.509 [2024-05-14 03:12:23.380808] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:25:37.509 [2024-05-14 03:12:23.380816] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:37.509 [2024-05-14 03:12:23.380824] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:25:37.509 [2024-05-14 03:12:23.380845] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:25:37.509 [2024-05-14 03:12:23.380854] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:25:37.509 [2024-05-14 03:12:23.380863] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:37.509 [2024-05-14 03:12:23.380872] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:25:37.509 [2024-05-14 03:12:23.380882] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:25:37.509 [2024-05-14 03:12:23.380891] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:25:37.509 [2024-05-14 03:12:23.380900] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:25:37.509 [2024-05-14 03:12:23.380908] 
ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:25:37.509 [2024-05-14 03:12:23.380917] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:25:37.509 [2024-05-14 03:12:23.380936] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:25:37.509 [2024-05-14 03:12:23.380947] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:37.509 [2024-05-14 03:12:23.380959] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:25:37.509 [2024-05-14 03:12:23.380972] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:25:37.509 [2024-05-14 03:12:23.380982] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:25:37.509 [2024-05-14 03:12:23.380992] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:25:37.509 [2024-05-14 03:12:23.381001] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:25:37.509 [2024-05-14 03:12:23.381011] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:25:37.509 [2024-05-14 03:12:23.381020] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:25:37.509 [2024-05-14 03:12:23.381030] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:25:37.509 [2024-05-14 03:12:23.381039] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:25:37.509 [2024-05-14 03:12:23.381048] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:25:37.509 [2024-05-14 03:12:23.381057] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:25:37.509 [2024-05-14 03:12:23.381067] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:25:37.509 [2024-05-14 03:12:23.381077] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:25:37.509 [2024-05-14 03:12:23.381085] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:25:37.509 [2024-05-14 03:12:23.381096] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:37.509 [2024-05-14 03:12:23.381106] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:37.509 [2024-05-14 03:12:23.381116] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:25:37.509 [2024-05-14 03:12:23.381128] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:25:37.509 [2024-05-14 03:12:23.381138] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:25:37.509 [2024-05-14 03:12:23.381148] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:37.509 [2024-05-14 03:12:23.381158] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:25:37.509 [2024-05-14 03:12:23.381168] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.791 ms 00:25:37.509 [2024-05-14 03:12:23.381180] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:37.509 [2024-05-14 03:12:23.385543] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:37.509 [2024-05-14 03:12:23.385577] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:25:37.509 [2024-05-14 03:12:23.385598] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.288 ms 00:25:37.509 [2024-05-14 03:12:23.385607] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:37.509 [2024-05-14 03:12:23.385647] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:37.509 [2024-05-14 03:12:23.385659] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:25:37.509 [2024-05-14 03:12:23.385669] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:25:37.509 [2024-05-14 03:12:23.385677] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:37.509 [2024-05-14 03:12:23.393593] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:37.509 [2024-05-14 03:12:23.393638] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:25:37.509 [2024-05-14 03:12:23.393653] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.857 ms 00:25:37.509 [2024-05-14 03:12:23.393663] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:37.509 [2024-05-14 03:12:23.393697] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:37.510 [2024-05-14 03:12:23.393710] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:25:37.510 [2024-05-14 03:12:23.393719] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:25:37.510 [2024-05-14 03:12:23.393734] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:37.510 [2024-05-14 03:12:23.393846] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:37.510 [2024-05-14 03:12:23.393869] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:25:37.510 [2024-05-14 03:12:23.393879] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:25:37.510 [2024-05-14 03:12:23.393887] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:37.510 [2024-05-14 03:12:23.393928] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:37.510 [2024-05-14 03:12:23.393955] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:25:37.510 [2024-05-14 03:12:23.393966] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:25:37.510 [2024-05-14 03:12:23.393974] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:37.510 [2024-05-14 03:12:23.399647] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:37.510 [2024-05-14 03:12:23.399682] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl] name: Initialize reloc 00:25:37.510 [2024-05-14 03:12:23.399714] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 5.641 ms 00:25:37.510 [2024-05-14 03:12:23.399723] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:37.510 [2024-05-14 03:12:23.399851] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:37.510 [2024-05-14 03:12:23.399875] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:25:37.510 [2024-05-14 03:12:23.399885] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:25:37.510 [2024-05-14 03:12:23.399899] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:37.510 [2024-05-14 03:12:23.403532] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:37.510 [2024-05-14 03:12:23.403585] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:25:37.510 [2024-05-14 03:12:23.403626] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.588 ms 00:25:37.510 [2024-05-14 03:12:23.403643] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:37.510 [2024-05-14 03:12:23.404795] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:37.510 [2024-05-14 03:12:23.404829] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:25:37.510 [2024-05-14 03:12:23.404858] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.135 ms 00:25:37.510 [2024-05-14 03:12:23.404867] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:37.510 [2024-05-14 03:12:23.422557] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:37.510 [2024-05-14 03:12:23.422614] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:25:37.510 [2024-05-14 03:12:23.422648] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 17.626 ms 00:25:37.510 [2024-05-14 03:12:23.422657] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:37.510 [2024-05-14 03:12:23.422756] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:25:37.510 [2024-05-14 03:12:23.422801] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:25:37.510 [2024-05-14 03:12:23.422839] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:25:37.510 [2024-05-14 03:12:23.422878] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:25:37.510 [2024-05-14 03:12:23.422892] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:37.510 [2024-05-14 03:12:23.422903] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:25:37.510 [2024-05-14 03:12:23.422917] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.171 ms 00:25:37.510 [2024-05-14 03:12:23.422926] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:37.510 [2024-05-14 03:12:23.422997] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:25:37.510 [2024-05-14 03:12:23.423015] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:37.510 [2024-05-14 03:12:23.423025] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:25:37.510 [2024-05-14 03:12:23.423035] mngt/ftl_mngt.c: 409:trace_step: 
*NOTICE*: [FTL][ftl] duration: 0.019 ms 00:25:37.510 [2024-05-14 03:12:23.423044] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:37.510 [2024-05-14 03:12:23.425726] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:37.510 [2024-05-14 03:12:23.425765] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:25:37.510 [2024-05-14 03:12:23.425796] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.656 ms 00:25:37.510 [2024-05-14 03:12:23.425812] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:37.510 [2024-05-14 03:12:23.426506] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:37.510 [2024-05-14 03:12:23.426558] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:25:37.510 [2024-05-14 03:12:23.426573] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:25:37.510 [2024-05-14 03:12:23.426583] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:37.510 [2024-05-14 03:12:23.426627] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:37.510 [2024-05-14 03:12:23.426640] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover unmap map 00:25:37.510 [2024-05-14 03:12:23.426662] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:25:37.510 [2024-05-14 03:12:23.426671] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:37.510 [2024-05-14 03:12:23.426829] ftl_nv_cache.c:2273:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 8032, seq id 14 00:25:38.077 [2024-05-14 03:12:23.996651] ftl_nv_cache.c:2210:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 8032, seq id 14 00:25:38.077 [2024-05-14 03:12:23.996858] ftl_nv_cache.c:2273:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 270176, seq id 15 00:25:38.645 [2024-05-14 03:12:24.564247] ftl_nv_cache.c:2210:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 270176, seq id 15 00:25:38.645 [2024-05-14 03:12:24.564396] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:38.645 [2024-05-14 03:12:24.564418] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:25:38.645 [2024-05-14 03:12:24.564433] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:38.645 [2024-05-14 03:12:24.564459] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:25:38.646 [2024-05-14 03:12:24.564475] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1137.744 ms 00:25:38.646 [2024-05-14 03:12:24.564485] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:38.646 [2024-05-14 03:12:24.564556] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:38.646 [2024-05-14 03:12:24.564598] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:25:38.646 [2024-05-14 03:12:24.564618] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:25:38.646 [2024-05-14 03:12:24.564626] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:38.646 [2024-05-14 03:12:24.571375] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:25:38.646 [2024-05-14 03:12:24.571506] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl] Action 00:25:38.646 [2024-05-14 03:12:24.571524] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:25:38.646 [2024-05-14 03:12:24.571535] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 6.861 ms 00:25:38.646 [2024-05-14 03:12:24.571544] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:38.646 [2024-05-14 03:12:24.572122] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:38.646 [2024-05-14 03:12:24.572158] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from SHM 00:25:38.646 [2024-05-14 03:12:24.572170] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.480 ms 00:25:38.646 [2024-05-14 03:12:24.572179] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:38.646 [2024-05-14 03:12:24.574510] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:38.646 [2024-05-14 03:12:24.574539] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:25:38.646 [2024-05-14 03:12:24.574568] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.294 ms 00:25:38.646 [2024-05-14 03:12:24.574577] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:38.646 [2024-05-14 03:12:24.577790] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:38.646 [2024-05-14 03:12:24.577827] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Complete unmap transaction 00:25:38.646 [2024-05-14 03:12:24.577842] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.180 ms 00:25:38.646 [2024-05-14 03:12:24.577863] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:38.646 [2024-05-14 03:12:24.577947] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:38.646 [2024-05-14 03:12:24.577964] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:25:38.646 [2024-05-14 03:12:24.577975] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:25:38.646 [2024-05-14 03:12:24.577993] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:38.646 [2024-05-14 03:12:24.579655] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:38.646 [2024-05-14 03:12:24.579689] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:25:38.646 [2024-05-14 03:12:24.579702] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.641 ms 00:25:38.646 [2024-05-14 03:12:24.579711] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:38.646 [2024-05-14 03:12:24.579743] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:38.646 [2024-05-14 03:12:24.579756] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:25:38.646 [2024-05-14 03:12:24.579764] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:25:38.646 [2024-05-14 03:12:24.579773] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:38.646 [2024-05-14 03:12:24.579841] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:25:38.646 [2024-05-14 03:12:24.579856] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:38.646 [2024-05-14 03:12:24.579874] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:25:38.646 [2024-05-14 03:12:24.579887] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:25:38.646 [2024-05-14 
03:12:24.579896] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:38.646 [2024-05-14 03:12:24.579950] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:38.646 [2024-05-14 03:12:24.579964] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:25:38.646 [2024-05-14 03:12:24.579974] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:25:38.646 [2024-05-14 03:12:24.579982] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:38.646 [2024-05-14 03:12:24.581092] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1208.108 ms, result 0 00:25:38.646 [2024-05-14 03:12:24.595997] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:38.646 [2024-05-14 03:12:24.611993] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:25:38.646 [2024-05-14 03:12:24.619895] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:38.646 [2024-05-14 03:12:24.620349] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:25:38.646 03:12:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:38.646 03:12:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # return 0 00:25:38.646 03:12:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:38.646 03:12:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:25:38.646 03:12:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:25:38.646 Validate MD5 checksum, iteration 1 00:25:38.646 03:12:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:25:38.646 03:12:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:25:38.646 03:12:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:25:38.646 03:12:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:25:38.646 03:12:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:25:38.646 03:12:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:38.646 03:12:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:38.646 03:12:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:38.646 03:12:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:25:38.646 03:12:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:25:38.905 [2024-05-14 03:12:24.748651] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 
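
The FTL startup sequence above finishes in 1208.108 ms, the NVMe/TCP target starts listening on 127.0.0.1 port 4420, and checksum iteration 1 begins by dumping the first 1 GiB of the exported ftln1 bdev to a file with spdk_dd. Below is that same spdk_dd invocation from the trace, reformatted with comments for readability; all paths and flags are taken verbatim from the log, only the line layout and the comments are added:

    # Read 1024 blocks of 1 MiB (1 GiB total) from the ftln1 input bdev into a
    # local file that will be md5-hashed. The initiator-side bdev configuration
    # comes from ini.json (prepared by tcp_initiator_setup above); queue depth
    # is 2 and the read starts at block 0.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
        --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
        --bs=1048576 --count=1024 --qd=2 --skip=0
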
00:25:38.905 [2024-05-14 03:12:24.749056] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95926 ] 00:25:38.905 [2024-05-14 03:12:24.897671] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:38.905 [2024-05-14 03:12:24.915908] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:39.163 [2024-05-14 03:12:24.959727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:42.042  Copying: 497/1024 [MB] (497 MBps) Copying: 994/1024 [MB] (497 MBps) Copying: 1024/1024 [MB] (average 495 MBps) 00:25:42.042 00:25:42.042 03:12:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:25:42.042 03:12:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:43.947 03:12:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:25:43.947 Validate MD5 checksum, iteration 2 00:25:43.947 03:12:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=667b8a39bc53d18ed2884e1888b872ac 00:25:43.947 03:12:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 667b8a39bc53d18ed2884e1888b872ac != \6\6\7\b\8\a\3\9\b\c\5\3\d\1\8\e\d\2\8\8\4\e\1\8\8\8\b\8\7\2\a\c ]] 00:25:43.947 03:12:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:25:43.947 03:12:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:25:43.947 03:12:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:25:43.947 03:12:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:25:43.947 03:12:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:43.947 03:12:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:43.947 03:12:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:43.947 03:12:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:25:43.947 03:12:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:25:43.947 [2024-05-14 03:12:29.764783] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:25:43.947 [2024-05-14 03:12:29.765618] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95984 ] 00:25:43.947 [2024-05-14 03:12:29.898077] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
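
Iteration 1 above hashes the dumped file, obtains 667b8a39bc53d18ed2884e1888b872ac, confirms it matches the value recorded earlier in the test, then advances the skip offset to 1024 and launches iteration 2. A minimal sketch of that per-iteration logic, reconstructed from the upgrade_shutdown.sh xtrace; the names iterations, testfile and expected_sums are placeholders rather than the script's actual variables, and the expected checksums were captured before the shutdown, outside this excerpt:

    test_validate_checksum() {
        local skip=0 i sum
        for ((i = 0; i < iterations; i++)); do
            echo "Validate MD5 checksum, iteration $((i + 1))"
            # Dump the next 1 GiB window of the ftln1 bdev over NVMe/TCP
            # (see the spdk_dd invocation above), then advance the window.
            tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip=$skip
            skip=$((skip + 1024))
            # Hash what was read back and require it to match the pre-shutdown sum.
            sum=$(md5sum "$testfile" | cut -f1 -d' ')
            [[ $sum == "${expected_sums[i]}" ]] || return 1
        done
    }
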
00:25:43.947 [2024-05-14 03:12:29.922313] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:43.947 [2024-05-14 03:12:29.967110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:46.962  Copying: 511/1024 [MB] (511 MBps) Copying: 976/1024 [MB] (465 MBps) Copying: 1024/1024 [MB] (average 489 MBps) 00:25:46.962 00:25:46.962 03:12:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:25:46.962 03:12:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:48.864 03:12:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:25:48.864 03:12:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=c7f65a782eaf1b1c20abf490754ae710 00:25:48.864 03:12:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ c7f65a782eaf1b1c20abf490754ae710 != \c\7\f\6\5\a\7\8\2\e\a\f\1\b\1\c\2\0\a\b\f\4\9\0\7\5\4\a\e\7\1\0 ]] 00:25:48.864 03:12:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:25:48.864 03:12:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:25:48.864 03:12:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:25:48.864 03:12:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:25:48.864 03:12:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:25:48.864 03:12:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:48.864 03:12:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:25:48.864 03:12:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:25:48.864 03:12:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:25:48.864 03:12:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:25:48.864 03:12:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 95904 ]] 00:25:48.864 03:12:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 95904 00:25:48.864 03:12:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@946 -- # '[' -z 95904 ']' 00:25:48.864 03:12:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # kill -0 95904 00:25:48.864 03:12:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@951 -- # uname 00:25:48.864 03:12:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:48.864 03:12:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 95904 00:25:48.864 killing process with pid 95904 00:25:48.864 03:12:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:48.864 03:12:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:48.864 03:12:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # echo 'killing process with pid 95904' 00:25:48.864 03:12:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@965 -- # kill 95904 00:25:48.864 [2024-05-14 03:12:34.824694] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:25:48.864 03:12:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # wait 95904 00:25:49.123 
[2024-05-14 03:12:34.915145] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:25:49.123 [2024-05-14 03:12:34.919537] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:49.123 [2024-05-14 03:12:34.919579] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:25:49.123 [2024-05-14 03:12:34.919597] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:25:49.123 [2024-05-14 03:12:34.919606] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:49.123 [2024-05-14 03:12:34.919632] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:25:49.123 [2024-05-14 03:12:34.920039] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:49.123 [2024-05-14 03:12:34.920057] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:25:49.123 [2024-05-14 03:12:34.920067] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.391 ms 00:25:49.123 [2024-05-14 03:12:34.920084] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:49.123 [2024-05-14 03:12:34.920546] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:49.123 [2024-05-14 03:12:34.920699] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:25:49.123 [2024-05-14 03:12:34.920808] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.438 ms 00:25:49.123 [2024-05-14 03:12:34.920943] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:49.123 [2024-05-14 03:12:34.922126] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:49.123 [2024-05-14 03:12:34.922332] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:25:49.123 [2024-05-14 03:12:34.922472] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.122 ms 00:25:49.123 [2024-05-14 03:12:34.922502] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:49.123 [2024-05-14 03:12:34.923697] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:49.123 [2024-05-14 03:12:34.923834] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P unmaps 00:25:49.123 [2024-05-14 03:12:34.923931] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.113 ms 00:25:49.123 [2024-05-14 03:12:34.924023] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:49.123 [2024-05-14 03:12:34.925436] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:49.123 [2024-05-14 03:12:34.925608] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:25:49.123 [2024-05-14 03:12:34.925715] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.322 ms 00:25:49.123 [2024-05-14 03:12:34.925809] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:49.123 [2024-05-14 03:12:34.926861] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:49.124 [2024-05-14 03:12:34.926924] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:25:49.124 [2024-05-14 03:12:34.926955] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.000 ms 00:25:49.124 [2024-05-14 03:12:34.926965] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:49.124 [2024-05-14 03:12:34.927038] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:49.124 [2024-05-14 
03:12:34.927054] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:25:49.124 [2024-05-14 03:12:34.927071] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:25:49.124 [2024-05-14 03:12:34.927080] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:49.124 [2024-05-14 03:12:34.928532] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:49.124 [2024-05-14 03:12:34.928736] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:25:49.124 [2024-05-14 03:12:34.928858] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.431 ms 00:25:49.124 [2024-05-14 03:12:34.928973] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:49.124 [2024-05-14 03:12:34.930213] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:49.124 [2024-05-14 03:12:34.930433] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:25:49.124 [2024-05-14 03:12:34.930549] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.164 ms 00:25:49.124 [2024-05-14 03:12:34.930625] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:49.124 [2024-05-14 03:12:34.931888] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:49.124 [2024-05-14 03:12:34.932040] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:25:49.124 [2024-05-14 03:12:34.932185] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.126 ms 00:25:49.124 [2024-05-14 03:12:34.932260] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:49.124 [2024-05-14 03:12:34.933560] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:49.124 [2024-05-14 03:12:34.933756] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:25:49.124 [2024-05-14 03:12:34.933855] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.105 ms 00:25:49.124 [2024-05-14 03:12:34.933899] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:49.124 [2024-05-14 03:12:34.933994] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:25:49.124 [2024-05-14 03:12:34.934019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:49.124 [2024-05-14 03:12:34.934044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:25:49.124 [2024-05-14 03:12:34.934054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:25:49.124 [2024-05-14 03:12:34.934065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:49.124 [2024-05-14 03:12:34.934075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:49.124 [2024-05-14 03:12:34.934085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:49.124 [2024-05-14 03:12:34.934094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:49.124 [2024-05-14 03:12:34.934104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:49.124 [2024-05-14 03:12:34.934113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:49.124 [2024-05-14 
03:12:34.934123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:49.124 [2024-05-14 03:12:34.934166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:49.124 [2024-05-14 03:12:34.934180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:49.124 [2024-05-14 03:12:34.934190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:49.124 [2024-05-14 03:12:34.934200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:49.124 [2024-05-14 03:12:34.934210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:49.124 [2024-05-14 03:12:34.934220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:49.124 [2024-05-14 03:12:34.934230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:49.124 [2024-05-14 03:12:34.934240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:49.124 [2024-05-14 03:12:34.934251] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:25:49.124 [2024-05-14 03:12:34.934266] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 75efb898-7f58-4054-9e83-434893bc1140 00:25:49.124 [2024-05-14 03:12:34.934277] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:25:49.124 [2024-05-14 03:12:34.934286] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:25:49.124 [2024-05-14 03:12:34.934295] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:25:49.124 [2024-05-14 03:12:34.934305] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:25:49.124 [2024-05-14 03:12:34.934325] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:25:49.124 [2024-05-14 03:12:34.934344] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:25:49.124 [2024-05-14 03:12:34.934354] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:25:49.124 [2024-05-14 03:12:34.934363] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:25:49.124 [2024-05-14 03:12:34.934371] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:25:49.124 [2024-05-14 03:12:34.934380] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:49.124 [2024-05-14 03:12:34.934390] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:25:49.124 [2024-05-14 03:12:34.934401] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.388 ms 00:25:49.124 [2024-05-14 03:12:34.934410] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:49.124 [2024-05-14 03:12:34.935784] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:49.124 [2024-05-14 03:12:34.935927] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:25:49.124 [2024-05-14 03:12:34.936024] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.350 ms 00:25:49.124 [2024-05-14 03:12:34.936123] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:49.124 [2024-05-14 03:12:34.936273] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:49.124 [2024-05-14 03:12:34.936323] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:25:49.124 [2024-05-14 03:12:34.936427] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.059 ms 00:25:49.124 [2024-05-14 03:12:34.936537] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:49.124 [2024-05-14 03:12:34.941660] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:49.124 [2024-05-14 03:12:34.941822] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:25:49.124 [2024-05-14 03:12:34.941926] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:49.124 [2024-05-14 03:12:34.942028] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:49.124 [2024-05-14 03:12:34.942097] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:49.124 [2024-05-14 03:12:34.942262] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:25:49.124 [2024-05-14 03:12:34.942314] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:49.124 [2024-05-14 03:12:34.942419] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:49.124 [2024-05-14 03:12:34.942556] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:49.124 [2024-05-14 03:12:34.942616] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:25:49.124 [2024-05-14 03:12:34.942721] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:49.124 [2024-05-14 03:12:34.942750] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:49.124 [2024-05-14 03:12:34.942782] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:49.124 [2024-05-14 03:12:34.942795] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:25:49.124 [2024-05-14 03:12:34.942804] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:49.124 [2024-05-14 03:12:34.942813] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:49.124 [2024-05-14 03:12:34.950853] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:49.124 [2024-05-14 03:12:34.951044] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:25:49.124 [2024-05-14 03:12:34.951220] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:49.124 [2024-05-14 03:12:34.951344] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:49.124 [2024-05-14 03:12:34.954692] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:49.124 [2024-05-14 03:12:34.954726] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:25:49.124 [2024-05-14 03:12:34.954741] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:49.124 [2024-05-14 03:12:34.954750] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:49.124 [2024-05-14 03:12:34.954823] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:49.124 [2024-05-14 03:12:34.954839] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:25:49.124 [2024-05-14 03:12:34.954848] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:49.124 [2024-05-14 03:12:34.954856] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:49.124 [2024-05-14 03:12:34.954885] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] 
Rollback 00:25:49.124 [2024-05-14 03:12:34.954902] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:25:49.124 [2024-05-14 03:12:34.954911] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:49.124 [2024-05-14 03:12:34.954919] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:49.124 [2024-05-14 03:12:34.954999] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:49.124 [2024-05-14 03:12:34.955015] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:25:49.124 [2024-05-14 03:12:34.955025] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:49.124 [2024-05-14 03:12:34.955033] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:49.124 [2024-05-14 03:12:34.955078] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:49.124 [2024-05-14 03:12:34.955097] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:25:49.124 [2024-05-14 03:12:34.955107] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:49.124 [2024-05-14 03:12:34.955115] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:49.124 [2024-05-14 03:12:34.955208] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:49.124 [2024-05-14 03:12:34.955222] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:25:49.124 [2024-05-14 03:12:34.955232] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:49.124 [2024-05-14 03:12:34.955241] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:49.124 [2024-05-14 03:12:34.955289] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:49.125 [2024-05-14 03:12:34.955308] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:25:49.125 [2024-05-14 03:12:34.955318] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:49.125 [2024-05-14 03:12:34.955327] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:49.125 [2024-05-14 03:12:34.955486] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 35.899 ms, result 0 00:25:49.125 03:12:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:25:49.125 03:12:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:49.383 03:12:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:25:49.383 03:12:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:25:49.383 03:12:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:25:49.383 03:12:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:49.383 Remove shared memory files 00:25:49.383 03:12:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:25:49.383 03:12:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:49.383 03:12:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:25:49.383 03:12:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:25:49.383 03:12:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid95742 00:25:49.383 03:12:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f 
rm -f /dev/shm/iscsi 00:25:49.383 03:12:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:25:49.383 ************************************ 00:25:49.383 END TEST ftl_upgrade_shutdown 00:25:49.383 ************************************ 00:25:49.383 00:25:49.383 real 1m8.771s 00:25:49.383 user 1m35.029s 00:25:49.383 sys 0m20.954s 00:25:49.383 03:12:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:49.383 03:12:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:49.383 Process with pid 88765 is not found 00:25:49.383 03:12:35 ftl -- ftl/ftl.sh@82 -- # '[' -eq 1 ']' 00:25:49.383 /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh: line 82: [: -eq: unary operator expected 00:25:49.383 03:12:35 ftl -- ftl/ftl.sh@89 -- # '[' -eq 1 ']' 00:25:49.383 /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh: line 89: [: -eq: unary operator expected 00:25:49.383 03:12:35 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:25:49.383 03:12:35 ftl -- ftl/ftl.sh@14 -- # killprocess 88765 00:25:49.383 03:12:35 ftl -- common/autotest_common.sh@946 -- # '[' -z 88765 ']' 00:25:49.383 03:12:35 ftl -- common/autotest_common.sh@950 -- # kill -0 88765 00:25:49.383 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (88765) - No such process 00:25:49.383 03:12:35 ftl -- common/autotest_common.sh@973 -- # echo 'Process with pid 88765 is not found' 00:25:49.383 03:12:35 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:25:49.383 03:12:35 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=96080 00:25:49.383 03:12:35 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:49.383 03:12:35 ftl -- ftl/ftl.sh@20 -- # waitforlisten 96080 00:25:49.383 03:12:35 ftl -- common/autotest_common.sh@827 -- # '[' -z 96080 ']' 00:25:49.383 03:12:35 ftl -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:49.384 03:12:35 ftl -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:49.384 03:12:35 ftl -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:49.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:49.384 03:12:35 ftl -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:49.384 03:12:35 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:49.384 [2024-05-14 03:12:35.321606] Starting SPDK v24.05-pre git sha1 1826c4dc5 / DPDK 24.07.0-rc0 initialization... 00:25:49.384 [2024-05-14 03:12:35.322006] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96080 ] 00:25:49.642 [2024-05-14 03:12:35.471508] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
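
The two '[: -eq: unary operator expected' errors from ftl.sh lines 82 and 89 above are a shell expansion problem captured by the trace: a test of the form [ $flag -eq 1 ] is evaluated while the left-hand variable expands to nothing, so the shell sees '[ -eq 1 ]' with no operand before -eq; the test simply fails, the guarded block is skipped, and the run carries on to the at_ftl_exit cleanup. A minimal illustration of the failure mode and two common guards; the variable name flag is hypothetical, not the one ftl.sh uses:

    unset flag
    [ $flag -eq 1 ]          # expands to '[ -eq 1 ]' -> "[: -eq: unary operator expected"
    [ "${flag:-0}" -eq 1 ]   # supply a default so the expansion is always well-formed
    [[ $flag -eq 1 ]]        # [[ ]] treats the empty operand as 0 instead of erroring
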
00:25:49.642 [2024-05-14 03:12:35.491226] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:49.642 [2024-05-14 03:12:35.523602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:50.210 03:12:36 ftl -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:50.210 03:12:36 ftl -- common/autotest_common.sh@860 -- # return 0 00:25:50.210 03:12:36 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:25:50.470 nvme0n1 00:25:50.470 03:12:36 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:25:50.470 03:12:36 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:50.470 03:12:36 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:50.729 03:12:36 ftl -- ftl/common.sh@28 -- # stores=12adeba0-ef7d-43e2-83cc-1c6e94754060 00:25:50.729 03:12:36 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:25:50.729 03:12:36 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 12adeba0-ef7d-43e2-83cc-1c6e94754060 00:25:50.989 03:12:36 ftl -- ftl/ftl.sh@23 -- # killprocess 96080 00:25:50.989 03:12:36 ftl -- common/autotest_common.sh@946 -- # '[' -z 96080 ']' 00:25:50.989 03:12:36 ftl -- common/autotest_common.sh@950 -- # kill -0 96080 00:25:50.989 03:12:36 ftl -- common/autotest_common.sh@951 -- # uname 00:25:50.989 03:12:36 ftl -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:50.989 03:12:36 ftl -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 96080 00:25:50.989 killing process with pid 96080 00:25:50.989 03:12:36 ftl -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:50.989 03:12:36 ftl -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:50.989 03:12:36 ftl -- common/autotest_common.sh@964 -- # echo 'killing process with pid 96080' 00:25:50.989 03:12:36 ftl -- common/autotest_common.sh@965 -- # kill 96080 00:25:50.989 03:12:36 ftl -- common/autotest_common.sh@970 -- # wait 96080 00:25:51.248 03:12:37 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:51.507 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:51.507 Waiting for block devices as requested 00:25:51.507 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:25:51.767 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:25:51.767 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:25:51.767 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:25:57.038 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:25:57.038 03:12:42 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:25:57.038 03:12:42 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:57.038 Remove shared memory files 00:25:57.038 03:12:42 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:25:57.038 03:12:42 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:25:57.038 03:12:42 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:25:57.038 03:12:42 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:25:57.038 03:12:42 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:25:57.038 00:25:57.038 real 10m44.728s 00:25:57.038 user 13m13.063s 00:25:57.038 sys 1m23.718s 00:25:57.038 03:12:42 ftl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:57.038 03:12:42 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:57.038 ************************************ 00:25:57.038 END TEST ftl 00:25:57.038 
************************************ 00:25:57.038 03:12:42 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:25:57.038 03:12:42 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:25:57.038 03:12:42 -- spdk/autotest.sh@348 -- # '[' 0 -eq 1 ']' 00:25:57.038 03:12:42 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:25:57.038 03:12:42 -- spdk/autotest.sh@359 -- # [[ 0 -eq 1 ]] 00:25:57.038 03:12:42 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:25:57.038 03:12:42 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:25:57.038 03:12:42 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:25:57.038 03:12:42 -- spdk/autotest.sh@376 -- # trap - SIGINT SIGTERM EXIT 00:25:57.038 03:12:42 -- spdk/autotest.sh@378 -- # timing_enter post_cleanup 00:25:57.038 03:12:42 -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:57.038 03:12:42 -- common/autotest_common.sh@10 -- # set +x 00:25:57.038 03:12:42 -- spdk/autotest.sh@379 -- # autotest_cleanup 00:25:57.038 03:12:42 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:25:57.038 03:12:42 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:25:57.038 03:12:42 -- common/autotest_common.sh@10 -- # set +x 00:25:58.416 INFO: APP EXITING 00:25:58.416 INFO: killing all VMs 00:25:58.416 INFO: killing vhost app 00:25:58.416 INFO: EXIT DONE 00:25:58.674 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:59.241 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:25:59.241 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:25:59.241 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:25:59.241 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:25:59.499 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:00.067 Cleaning 00:26:00.067 Removing: /var/run/dpdk/spdk0/config 00:26:00.067 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:26:00.067 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:26:00.067 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:26:00.067 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:26:00.067 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:26:00.067 Removing: /var/run/dpdk/spdk0/hugepage_info 00:26:00.067 Removing: /var/run/dpdk/spdk0 00:26:00.067 Removing: /var/run/dpdk/spdk_pid74920 00:26:00.067 Removing: /var/run/dpdk/spdk_pid75077 00:26:00.067 Removing: /var/run/dpdk/spdk_pid75269 00:26:00.067 Removing: /var/run/dpdk/spdk_pid75357 00:26:00.067 Removing: /var/run/dpdk/spdk_pid75385 00:26:00.067 Removing: /var/run/dpdk/spdk_pid75497 00:26:00.067 Removing: /var/run/dpdk/spdk_pid75517 00:26:00.067 Removing: /var/run/dpdk/spdk_pid75670 00:26:00.067 Removing: /var/run/dpdk/spdk_pid75735 00:26:00.067 Removing: /var/run/dpdk/spdk_pid75812 00:26:00.067 Removing: /var/run/dpdk/spdk_pid75899 00:26:00.067 Removing: /var/run/dpdk/spdk_pid75971 00:26:00.067 Removing: /var/run/dpdk/spdk_pid76011 00:26:00.067 Removing: /var/run/dpdk/spdk_pid76042 00:26:00.067 Removing: /var/run/dpdk/spdk_pid76104 00:26:00.067 Removing: /var/run/dpdk/spdk_pid76205 00:26:00.067 Removing: /var/run/dpdk/spdk_pid76634 00:26:00.067 Removing: /var/run/dpdk/spdk_pid76682 00:26:00.067 Removing: /var/run/dpdk/spdk_pid76728 00:26:00.067 Removing: /var/run/dpdk/spdk_pid76744 00:26:00.067 Removing: /var/run/dpdk/spdk_pid76808 00:26:00.067 Removing: /var/run/dpdk/spdk_pid76824 00:26:00.067 Removing: /var/run/dpdk/spdk_pid76892 00:26:00.067 Removing: /var/run/dpdk/spdk_pid76903 00:26:00.067 
Removing: /var/run/dpdk/spdk_pid76951 00:26:00.067 Removing: /var/run/dpdk/spdk_pid76969 00:26:00.067 Removing: /var/run/dpdk/spdk_pid77011 00:26:00.067 Removing: /var/run/dpdk/spdk_pid77029 00:26:00.067 Removing: /var/run/dpdk/spdk_pid77152 00:26:00.067 Removing: /var/run/dpdk/spdk_pid77190 00:26:00.067 Removing: /var/run/dpdk/spdk_pid77260 00:26:00.067 Removing: /var/run/dpdk/spdk_pid77319 00:26:00.067 Removing: /var/run/dpdk/spdk_pid77339 00:26:00.067 Removing: /var/run/dpdk/spdk_pid77406 00:26:00.067 Removing: /var/run/dpdk/spdk_pid77436 00:26:00.067 Removing: /var/run/dpdk/spdk_pid77477 00:26:00.067 Removing: /var/run/dpdk/spdk_pid77507 00:26:00.067 Removing: /var/run/dpdk/spdk_pid77543 00:26:00.067 Removing: /var/run/dpdk/spdk_pid77578 00:26:00.067 Removing: /var/run/dpdk/spdk_pid77608 00:26:00.067 Removing: /var/run/dpdk/spdk_pid77649 00:26:00.067 Removing: /var/run/dpdk/spdk_pid77679 00:26:00.067 Removing: /var/run/dpdk/spdk_pid77715 00:26:00.067 Removing: /var/run/dpdk/spdk_pid77750 00:26:00.067 Removing: /var/run/dpdk/spdk_pid77786 00:26:00.067 Removing: /var/run/dpdk/spdk_pid77821 00:26:00.067 Removing: /var/run/dpdk/spdk_pid77851 00:26:00.067 Removing: /var/run/dpdk/spdk_pid77887 00:26:00.067 Removing: /var/run/dpdk/spdk_pid77922 00:26:00.067 Removing: /var/run/dpdk/spdk_pid77958 00:26:00.067 Removing: /var/run/dpdk/spdk_pid77996 00:26:00.327 Removing: /var/run/dpdk/spdk_pid78035 00:26:00.327 Removing: /var/run/dpdk/spdk_pid78065 00:26:00.327 Removing: /var/run/dpdk/spdk_pid78107 00:26:00.327 Removing: /var/run/dpdk/spdk_pid78171 00:26:00.327 Removing: /var/run/dpdk/spdk_pid78261 00:26:00.327 Removing: /var/run/dpdk/spdk_pid78406 00:26:00.327 Removing: /var/run/dpdk/spdk_pid78479 00:26:00.327 Removing: /var/run/dpdk/spdk_pid78510 00:26:00.327 Removing: /var/run/dpdk/spdk_pid78943 00:26:00.327 Removing: /var/run/dpdk/spdk_pid79030 00:26:00.327 Removing: /var/run/dpdk/spdk_pid79134 00:26:00.327 Removing: /var/run/dpdk/spdk_pid79171 00:26:00.327 Removing: /var/run/dpdk/spdk_pid79196 00:26:00.327 Removing: /var/run/dpdk/spdk_pid79272 00:26:00.327 Removing: /var/run/dpdk/spdk_pid79878 00:26:00.327 Removing: /var/run/dpdk/spdk_pid79909 00:26:00.327 Removing: /var/run/dpdk/spdk_pid80393 00:26:00.327 Removing: /var/run/dpdk/spdk_pid80475 00:26:00.327 Removing: /var/run/dpdk/spdk_pid80574 00:26:00.327 Removing: /var/run/dpdk/spdk_pid80616 00:26:00.327 Removing: /var/run/dpdk/spdk_pid80641 00:26:00.327 Removing: /var/run/dpdk/spdk_pid80667 00:26:00.327 Removing: /var/run/dpdk/spdk_pid82481 00:26:00.327 Removing: /var/run/dpdk/spdk_pid82602 00:26:00.327 Removing: /var/run/dpdk/spdk_pid82611 00:26:00.327 Removing: /var/run/dpdk/spdk_pid82623 00:26:00.327 Removing: /var/run/dpdk/spdk_pid82668 00:26:00.327 Removing: /var/run/dpdk/spdk_pid82672 00:26:00.327 Removing: /var/run/dpdk/spdk_pid82684 00:26:00.327 Removing: /var/run/dpdk/spdk_pid82723 00:26:00.327 Removing: /var/run/dpdk/spdk_pid82727 00:26:00.327 Removing: /var/run/dpdk/spdk_pid82739 00:26:00.327 Removing: /var/run/dpdk/spdk_pid82784 00:26:00.327 Removing: /var/run/dpdk/spdk_pid82788 00:26:00.327 Removing: /var/run/dpdk/spdk_pid82800 00:26:00.327 Removing: /var/run/dpdk/spdk_pid84147 00:26:00.327 Removing: /var/run/dpdk/spdk_pid84225 00:26:00.327 Removing: /var/run/dpdk/spdk_pid85108 00:26:00.327 Removing: /var/run/dpdk/spdk_pid85454 00:26:00.327 Removing: /var/run/dpdk/spdk_pid85537 00:26:00.327 Removing: /var/run/dpdk/spdk_pid85611 00:26:00.327 Removing: /var/run/dpdk/spdk_pid85686 00:26:00.327 Removing: 
/var/run/dpdk/spdk_pid85782 00:26:00.327 Removing: /var/run/dpdk/spdk_pid85851 00:26:00.327 Removing: /var/run/dpdk/spdk_pid85974 00:26:00.327 Removing: /var/run/dpdk/spdk_pid86239 00:26:00.327 Removing: /var/run/dpdk/spdk_pid86269 00:26:00.327 Removing: /var/run/dpdk/spdk_pid86714 00:26:00.327 Removing: /var/run/dpdk/spdk_pid86888 00:26:00.327 Removing: /var/run/dpdk/spdk_pid86971 00:26:00.327 Removing: /var/run/dpdk/spdk_pid87072 00:26:00.327 Removing: /var/run/dpdk/spdk_pid87115 00:26:00.327 Removing: /var/run/dpdk/spdk_pid87135 00:26:00.327 Removing: /var/run/dpdk/spdk_pid87415 00:26:00.327 Removing: /var/run/dpdk/spdk_pid87452 00:26:00.327 Removing: /var/run/dpdk/spdk_pid87498 00:26:00.327 Removing: /var/run/dpdk/spdk_pid87844 00:26:00.327 Removing: /var/run/dpdk/spdk_pid87988 00:26:00.327 Removing: /var/run/dpdk/spdk_pid88765 00:26:00.327 Removing: /var/run/dpdk/spdk_pid88873 00:26:00.327 Removing: /var/run/dpdk/spdk_pid89043 00:26:00.327 Removing: /var/run/dpdk/spdk_pid89129 00:26:00.327 Removing: /var/run/dpdk/spdk_pid89471 00:26:00.327 Removing: /var/run/dpdk/spdk_pid89727 00:26:00.327 Removing: /var/run/dpdk/spdk_pid90067 00:26:00.327 Removing: /var/run/dpdk/spdk_pid90238 00:26:00.327 Removing: /var/run/dpdk/spdk_pid90364 00:26:00.327 Removing: /var/run/dpdk/spdk_pid90400 00:26:00.327 Removing: /var/run/dpdk/spdk_pid90538 00:26:00.327 Removing: /var/run/dpdk/spdk_pid90552 00:26:00.327 Removing: /var/run/dpdk/spdk_pid90588 00:26:00.327 Removing: /var/run/dpdk/spdk_pid90781 00:26:00.327 Removing: /var/run/dpdk/spdk_pid90991 00:26:00.327 Removing: /var/run/dpdk/spdk_pid91432 00:26:00.327 Removing: /var/run/dpdk/spdk_pid91885 00:26:00.327 Removing: /var/run/dpdk/spdk_pid92336 00:26:00.327 Removing: /var/run/dpdk/spdk_pid92838 00:26:00.327 Removing: /var/run/dpdk/spdk_pid92969 00:26:00.327 Removing: /var/run/dpdk/spdk_pid93062 00:26:00.327 Removing: /var/run/dpdk/spdk_pid93751 00:26:00.327 Removing: /var/run/dpdk/spdk_pid93810 00:26:00.586 Removing: /var/run/dpdk/spdk_pid94296 00:26:00.586 Removing: /var/run/dpdk/spdk_pid94715 00:26:00.586 Removing: /var/run/dpdk/spdk_pid95259 00:26:00.586 Removing: /var/run/dpdk/spdk_pid95370 00:26:00.586 Removing: /var/run/dpdk/spdk_pid95395 00:26:00.586 Removing: /var/run/dpdk/spdk_pid95458 00:26:00.586 Removing: /var/run/dpdk/spdk_pid95504 00:26:00.586 Removing: /var/run/dpdk/spdk_pid95569 00:26:00.586 Removing: /var/run/dpdk/spdk_pid95742 00:26:00.586 Removing: /var/run/dpdk/spdk_pid95775 00:26:00.586 Removing: /var/run/dpdk/spdk_pid95843 00:26:00.586 Removing: /var/run/dpdk/spdk_pid95904 00:26:00.586 Removing: /var/run/dpdk/spdk_pid95926 00:26:00.586 Removing: /var/run/dpdk/spdk_pid95984 00:26:00.586 Removing: /var/run/dpdk/spdk_pid96080 00:26:00.586 Clean 00:26:00.586 03:12:46 -- common/autotest_common.sh@1447 -- # return 0 00:26:00.586 03:12:46 -- spdk/autotest.sh@380 -- # timing_exit post_cleanup 00:26:00.586 03:12:46 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:00.586 03:12:46 -- common/autotest_common.sh@10 -- # set +x 00:26:00.586 03:12:46 -- spdk/autotest.sh@382 -- # timing_exit autotest 00:26:00.586 03:12:46 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:00.586 03:12:46 -- common/autotest_common.sh@10 -- # set +x 00:26:00.586 03:12:46 -- spdk/autotest.sh@383 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:26:00.586 03:12:46 -- spdk/autotest.sh@385 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:26:00.586 03:12:46 -- spdk/autotest.sh@385 -- # rm -f 
/home/vagrant/spdk_repo/spdk/../output/udev.log 00:26:00.586 03:12:46 -- spdk/autotest.sh@387 -- # hash lcov 00:26:00.586 03:12:46 -- spdk/autotest.sh@387 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:26:00.586 03:12:46 -- spdk/autotest.sh@389 -- # hostname 00:26:00.586 03:12:46 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1705279005-2131 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:26:00.845 geninfo: WARNING: invalid characters removed from testname! 00:26:22.806 03:13:07 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:24.710 03:13:10 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:27.240 03:13:12 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:29.141 03:13:14 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:31.674 03:13:17 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:33.580 03:13:19 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:36.113 03:13:21 -- spdk/autotest.sh@396 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:26:36.113 03:13:21 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:36.113 03:13:21 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:26:36.113 03:13:21 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:36.113 03:13:21 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:36.113 03:13:21 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.113 03:13:21 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.113 03:13:21 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.113 03:13:21 -- paths/export.sh@5 -- $ export PATH 00:26:36.113 03:13:21 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.113 03:13:21 -- common/autobuild_common.sh@436 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:26:36.113 03:13:21 -- common/autobuild_common.sh@437 -- $ date +%s 00:26:36.113 03:13:21 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715656401.XXXXXX 00:26:36.113 03:13:21 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715656401.Uu7Fnk 00:26:36.113 03:13:21 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:26:36.113 03:13:21 -- common/autobuild_common.sh@443 -- $ '[' -n main ']' 00:26:36.113 03:13:21 -- common/autobuild_common.sh@444 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:26:36.113 03:13:21 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:26:36.113 03:13:21 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:26:36.113 03:13:21 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:26:36.113 03:13:21 -- common/autobuild_common.sh@453 -- $ get_config_params 00:26:36.113 03:13:21 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:26:36.113 03:13:21 -- common/autotest_common.sh@10 -- $ set +x 00:26:36.113 03:13:21 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme' 00:26:36.113 03:13:21 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:26:36.113 03:13:21 -- pm/common@17 -- $ local monitor 00:26:36.113 03:13:21 -- pm/common@19 
-- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:26:36.113 03:13:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:26:36.113 03:13:21 -- pm/common@25 -- $ sleep 1
00:26:36.113 03:13:21 -- pm/common@21 -- $ date +%s
00:26:36.113 03:13:21 -- pm/common@21 -- $ date +%s
00:26:36.113 03:13:21 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1715656401
00:26:36.113 03:13:21 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1715656401
00:26:36.113 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1715656401_collect-vmstat.pm.log
00:26:36.113 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1715656401_collect-cpu-load.pm.log
00:26:37.052 03:13:22 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT
00:26:37.052 03:13:22 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10
00:26:37.052 03:13:22 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk
00:26:37.052 03:13:22 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:26:37.052 03:13:22 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:26:37.052 03:13:22 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:26:37.052 03:13:22 -- spdk/autopackage.sh@19 -- $ timing_finish
00:26:37.052 03:13:22 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:26:37.052 03:13:22 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:26:37.052 03:13:22 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:26:37.052 03:13:22 -- spdk/autopackage.sh@20 -- $ exit 0
00:26:37.052 03:13:22 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:26:37.052 03:13:22 -- pm/common@29 -- $ signal_monitor_resources TERM
00:26:37.052 03:13:22 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:26:37.052 03:13:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:26:37.052 03:13:22 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:26:37.052 03:13:22 -- pm/common@44 -- $ pid=97747
00:26:37.052 03:13:22 -- pm/common@50 -- $ kill -TERM 97747
00:26:37.052 03:13:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:26:37.052 03:13:22 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:26:37.052 03:13:22 -- pm/common@44 -- $ pid=97748
00:26:37.052 03:13:22 -- pm/common@50 -- $ kill -TERM 97748
00:26:37.052 + [[ -n 5922 ]]
00:26:37.052 + sudo kill 5922
00:26:37.062 [Pipeline] }
00:26:37.081 [Pipeline] // timeout
00:26:37.086 [Pipeline] }
00:26:37.104 [Pipeline] // stage
00:26:37.110 [Pipeline] }
00:26:37.126 [Pipeline] // catchError
00:26:37.136 [Pipeline] stage
00:26:37.138 [Pipeline] { (Stop VM)
00:26:37.152 [Pipeline] sh
00:26:37.432 + vagrant halt
00:26:39.964 ==> default: Halting domain...
00:26:46.539 [Pipeline] sh
00:26:46.817 + vagrant destroy -f
00:26:49.347 ==> default: Removing domain...
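
The monitor shutdown above follows a simple pid-file convention: each collect-* sampler records its process ID under the power/ output directory when it starts, and stop_monitor_resources (run from the EXIT trap) reads those files and sends SIGTERM, which is the kill -TERM 97747 / kill -TERM 97748 seen in this run. The following is a minimal bash sketch of that pattern; the sampler command, directory, and file names are illustrative stand-ins, not the actual scripts/perf/pm collectors or pm/common helpers.

  #!/usr/bin/env bash
  # Sketch of the pid-file monitor pattern (illustrative names, not SPDK's pm/common code).
  POWER_DIR=${POWER_DIR:-/tmp/power}                    # assumed output directory

  start_vmstat_monitor() {
      mkdir -p "$POWER_DIR"
      vmstat 1 >> "$POWER_DIR/collect-vmstat.log" &     # stand-in for the collect-vmstat sampler
      echo $! > "$POWER_DIR/collect-vmstat.pid"         # remember the sampler's PID
  }

  stop_monitors() {
      local pidfile pid
      for pidfile in "$POWER_DIR"/*.pid; do
          [[ -e $pidfile ]] || continue                 # monitor never started or already stopped
          pid=$(<"$pidfile")
          kill -TERM "$pid" 2>/dev/null || true         # same TERM signal as in the log above
          rm -f "$pidfile"
      done
  }

  trap stop_monitors EXIT                               # mirrors 'trap stop_monitor_resources EXIT'
  start_vmstat_monitor
  sleep 5                                               # stand-in for the packaged build/test workload

Keying the teardown off pid files rather than process names keeps the EXIT trap idempotent: a missing pid file simply means that monitor was never started or has already been cleaned up.
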
00:26:49.926 [Pipeline] sh
00:26:50.209 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:26:50.219 [Pipeline] }
00:26:50.238 [Pipeline] // stage
00:26:50.243 [Pipeline] }
00:26:50.260 [Pipeline] // dir
00:26:50.265 [Pipeline] }
00:26:50.282 [Pipeline] // wrap
00:26:50.288 [Pipeline] }
00:26:50.303 [Pipeline] // catchError
00:26:50.312 [Pipeline] stage
00:26:50.314 [Pipeline] { (Epilogue)
00:26:50.329 [Pipeline] sh
00:26:50.612 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:26:55.957 [Pipeline] catchError
00:26:55.959 [Pipeline] {
00:26:55.973 [Pipeline] sh
00:26:56.255 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:26:56.514 Artifacts sizes are good
00:26:56.523 [Pipeline] }
00:26:56.539 [Pipeline] // catchError
00:26:56.549 [Pipeline] archiveArtifacts
00:26:56.556 Archiving artifacts
00:26:56.697 [Pipeline] cleanWs
00:26:56.708 [WS-CLEANUP] Deleting project workspace...
00:26:56.708 [WS-CLEANUP] Deferred wipeout is used...
00:26:56.715 [WS-CLEANUP] done
00:26:56.717 [Pipeline] }
00:26:56.733 [Pipeline] // stage
00:26:56.739 [Pipeline] }
00:26:56.754 [Pipeline] // node
00:26:56.759 [Pipeline] End of Pipeline
00:26:56.796 Finished: SUCCESS
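
The Epilogue stage above compresses the collected artifacts and gates archiving on a size check ("Artifacts sizes are good") before cleanWs wipes the workspace. The actual compress_artifacts.sh and check_artifacts_size.sh are not shown in this log, so the sketch below is only a hedged illustration of such a gate; the 1024 MB cap and the 'output' directory name are assumptions.

  #!/usr/bin/env bash
  # Hypothetical artifact size gate; limit and directory are assumptions, not the real script.
  set -euo pipefail

  ARTIFACT_DIR=${1:-output}
  LIMIT_MB=${LIMIT_MB:-1024}

  size_mb=$(du -sm "$ARTIFACT_DIR" | awk '{print $1}')
  if (( size_mb > LIMIT_MB )); then
      echo "Artifacts too large: ${size_mb} MB > ${LIMIT_MB} MB" >&2
      exit 1                                            # non-zero status trips the catchError block
  fi
  echo "Artifacts sizes are good (${size_mb} MB)"

Running the check inside catchError means an oversized artifact set marks the stage as failed while still letting archiveArtifacts and workspace cleanup run.
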